Search Results (726)

Search Parameters:
Keywords = coherent radar

21 pages, 23870 KiB  
Article
Utilizing LuTan-1 SAR Images to Monitor the Mining-Induced Subsidence and Comparative Analysis with Sentinel-1
by Fengqi Yang, Xianlin Shi, Keren Dai, Wenlong Zhang, Shuai Yang, Jing Han, Ningling Wen, Jin Deng, Tao Li, Yuan Yao and Rui Zhang
Remote Sens. 2024, 16(22), 4281; https://doi.org/10.3390/rs16224281 - 17 Nov 2024
Viewed by 359
Abstract
The LuTan-1 (LT-1) satellite, launched in 2022, is China’s first L-band full-polarimetric Synthetic Aperture Radar (SAR) constellation with interferometric capabilities. However, given its limited use in subsidence monitoring to date, a comprehensive evaluation of LT-1’s interferometric quality and capabilities is necessary. In this study, we applied the Differential Interferometric Synthetic Aperture Radar (DInSAR) technique to LT-1 data to analyze mining-induced subsidence near Shenmu City (China), revealing nine subsidence areas with a maximum subsidence of −19.6 mm within 32 days. Furthermore, a comparative analysis between LT-1 and Sentinel-1 data was conducted, focusing on subsidence results, interferometric phase, scattering intensity, and interferometric coherence. Notably, LT-1 detected some subsidence areas larger than those identified by Sentinel-1, attributable to LT-1’s high resolution, which significantly enhances the detectability of deformation gradients. Additionally, the coherence of LT-1 data exceeded that of Sentinel-1, owing to LT-1’s longer L-band wavelength compared with Sentinel-1’s C-band. This higher coherence enabled more accurate capture of differential interferometric phases, particularly in areas with large-gradient subsidence. Moreover, LT-1’s monitoring results surpassed Sentinel-1’s in root mean square error (RMSE), standard deviation (SD), and signal-to-noise ratio (SNR). These findings provide valuable insights for future subsidence-monitoring tasks with LT-1 data, and the systematic differences between the two satellites confirm that LT-1 is well suited to detailed, accurate subsidence monitoring in complex environments.
(This article belongs to the Special Issue Advances in Remote Sensing for Land Subsidence Monitoring)
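The coherence comparison at the heart of this study rests on the standard windowed sample-coherence estimator. The sketch below is not the authors' code; the window size and the simulated decorrelation level are illustrative. It shows how a coherence map is computed from two coregistered SLC images:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence_map(slc1, slc2, win=5):
    """Windowed sample coherence |gamma| in [0, 1] between two coregistered SLCs."""
    ifg = slc1 * np.conj(slc2)  # complex interferogram
    num = uniform_filter(ifg.real, win) + 1j * uniform_filter(ifg.imag, win)
    den = np.sqrt(uniform_filter(np.abs(slc1) ** 2, win) *
                  uniform_filter(np.abs(slc2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

# Toy check: a pair simulated with true coherence 0.9
rng = np.random.default_rng(0)
shape = (256, 256)
a = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
n = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
b = 0.9 * a + np.sqrt(1 - 0.9 ** 2) * n
print(coherence_map(a, b).mean())  # close to 0.9 (biased slightly high for small windows)
```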
Figures:
Figure 1. (a) Geographical location of the study area; (b) desert grass beach area; (c) open-pit mining area.
Figure 2. Technical workflow chart.
Figure 3. (a) LT-1 subsidence monitoring results in the study area; (b–d) enlarged views of typical subsidence areas; (c1,d1) typical subsidence areas identified by both LT-1 and Sentinel-1 data.
Figure 4. (a) Sentinel-1 subsidence monitoring results in the study area; (b–d) enlarged views of typical subsidence areas.
Figure 5. (a,d,g,j) Results from LT-1 data; (b,e,h,k) results from Sentinel-1 data; (c,f,i,l) Google Earth optical images. The dashed circles mark areas of subsidence; AA′ is the profile line.
Figure 6. Subsidence results along the A-A′ cross-section for LT-1 and Sentinel-1.
Figure 7. (d,e) Interferometric phase maps of LT-1 and Sentinel-1 data, respectively; (a–c) enlarged views of LT-1 data, Sentinel-1 data, and Google optical images in the first typical subsidence area; (f–h) the same for the second typical subsidence area. The dashed circles mark areas of subsidence.
Figure 8. (d,e) Backscatter intensity maps of LT-1 and Sentinel-1 data, respectively; (a–c,f–h) enlarged views of LT-1 data, Sentinel-1 data, and Google optical images.
Figure 9. (d,e) Coherence maps of LT-1 and Sentinel-1, respectively; (a–c,f–h) enlarged views of LT-1, Sentinel-1, and Google optical images in typical subsidence areas A and B, respectively.
Figure 10. (a–c) Statistical charts of coherence comparison in the study area, area A, and area B.
Figure 11. Track of the LT-1 satellite observing the study area (left) and track of the Sentinel-1 satellite (right).
Figure 12. MDDG distribution for different SAR satellites under variations of wavelength and resolution.
17 pages, 6219 KiB  
Article
DGGNets: Deep Gradient-Guidance Networks for Speckle Noise Reduction
by Li Wang, Jinkai Li, Yi-Fei Pu, Hao Yin and Paul Liu
Fractal Fract. 2024, 8(11), 666; https://doi.org/10.3390/fractalfract8110666 - 15 Nov 2024
Viewed by 309
Abstract
Speckle noise is a granular interference that degrades image quality in coherent imaging systems, including underwater sonar, Synthetic Aperture Radar (SAR), and medical ultrasound. This study aims to enhance speckle noise reduction through advanced deep learning techniques. We introduce the Deep Gradient-Guidance Network (DGGNet), whose architecture comprises one encoder and two decoders, one dedicated to image recovery and the other to gradient preservation. Our approach integrates a gradient map and fractional-order total variation into the loss function to guide training. The gradient map provides structural guidance for edge preservation and directs the denoising branch to focus on sharp regions, thereby preventing over-smoothing. The fractional-order total variation mitigates detail ambiguity and excessive smoothing, ensuring that rich textures and detailed information are retained. Extensive experiments yield an average Peak Signal-to-Noise Ratio (PSNR) of 31.52 dB and a Structural Similarity Index (SSIM) of 0.863 across benchmark datasets including McMaster, Kodak24, BSD68, Set12, and Urban100. DGGNet outperforms existing methods such as RIDNet, which achieved a PSNR of 31.42 dB and an SSIM of 0.853, thereby establishing new benchmarks in speckle noise reduction.
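As a rough illustration of how a gradient map and a fractional-order total-variation term can be combined into one training loss, here is a sketch under assumed conventions: the loss weights, the fractional order, and the truncated Grünwald-Letnikov approximation are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_map(img):
    """First-order gradient magnitude used as structural guidance."""
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def dgg_style_loss(denoised, clean, lam_grad=0.1, lam_tv=0.01, alpha=0.8, K=4):
    """Hypothetical composite loss: fidelity + gradient guidance + fractional-order TV.
    The fractional derivative is approximated by a truncated Grunwald-Letnikov sum
    (circular boundaries, one axis only, for simplicity)."""
    fidelity = np.mean((denoised - clean) ** 2)
    grad_term = np.mean(np.abs(gradient_map(denoised) - gradient_map(clean)))
    # Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), built by recurrence
    w = [1.0]
    for k in range(1, K + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    frac = sum(wk * np.roll(denoised, -k, axis=1) for k, wk in enumerate(w))
    tv_term = np.mean(np.abs(frac))
    return fidelity + lam_grad * grad_term + lam_tv * tv_term
```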
Figures:
Figure 1. System architecture of a speckle noise reduction system.
Figure 2. Network structure of the proposed DGGNet, which consists of one encoder and two decoders (one for the denoising branch and one for the gradient branch). The gradient branch guides the denoising branch by fusing gradient information to enhance structure preservation.
Figure 3. Flow diagram of the proposed DGGNet.
Figure 4. Denoising visualization of the proposed DGGNet against competing methods on the ultrasound dataset: clean, noisy, and denoising results of SRAD [23], OBNLM [8], NLLRF [7], MHM [35], DnCNN [16], RIDNet [17], MSANN [20], and DGGNet.
Figure 5. Denoising visualization on the ultrasound dataset: ground truth, noisy, and denoising results of SRAD [23], OBNLM [8], NLLRF [7], DnCNN [16], MHM [35], RIDNet [17], MSANN [20], and DGGNet.
Figure 6. Denoising visualization on realistic experimental data: noisy input and denoising results of SRAD [23], OBNLM [8], NLLRF [7], MHM [35], DnCNN [16], RIDNet [17], MSANN [20], and DGGNet.
Figure 7. Average feature maps of the upsampling block in the decoding architecture of the denoising branch. (a) Denoising result (top) and corresponding noisy image (bottom); (b–e) average feature maps at 16×16, 32×32, 64×64, and 128×128. In each pair, the upper image is the average feature map with the gradient branch and the lower without, showing that gradient guidance better preserves structural information.
23 pages, 16601 KiB  
Article
Adaptive Weighted Coherence Ratio Approach for Industrial Explosion Damage Mapping: Application to the 2015 Tianjin Port Incident
by Zhe Su and Chun Fan
Remote Sens. 2024, 16(22), 4241; https://doi.org/10.3390/rs16224241 - 14 Nov 2024
Viewed by 306
Abstract
The 2015 Tianjin Port chemical explosion highlighted the severe environmental and structural impacts of industrial disasters. This study presents the Adaptive Weighted Coherence Ratio technique, a novel approach for assessing such damage using synthetic aperture radar (SAR) data. Our method overcomes limitations of traditional techniques by incorporating temporal and spatial weighting factors, such as distance from the explosion epicenter, pre- and post-event intervals, and coherence quality, into a robust framework for precise damage classification. This approach effectively captures extreme damage scenarios, including crater formation in inner blast zones, which are challenging for conventional coherence scaling. Through a detailed analysis of the Tianjin explosion, we reveal asymmetric damage patterns influenced by high-rise buildings and demonstrate the method’s applicability to other industrial disasters, such as the 2020 Beirut explosion. Additionally, we introduce a technique for estimating crater dimensions from coherence profiles, enhancing assessment in severely damaged areas. To support the structural analysis, we model air pollutant dispersal using HYSPLIT simulations. This integrated approach advances SAR-based damage assessment, providing rapid, reliable classifications applicable to a range of industrial explosions and aiding disaster response and recovery planning.
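The core quantity is a pre-/co-event coherence ratio modulated by weighting factors. The abstract names the factors but not their functional forms, so the forms below are placeholders (an exponential distance decay and an identity coherence-quality weight) meant only to show the structure of such a classifier:

```python
import numpy as np

def damage_ratio(coh_pre, coh_co):
    """Coherence-ratio damage proxy: ~0 for intact ground, ~1 for full decorrelation."""
    return np.clip((coh_pre - coh_co) / np.maximum(coh_pre, 1e-6), 0.0, 1.0)

def weighted_ratio(coh_pre, coh_co, dist_m, r0=2000.0):
    """Damage proxy weighted by assumed forms of two of the paper's factors:
    a coherence quality factor (CQF) and a distance normalization factor (DNF)."""
    cqf = np.clip(coh_pre, 0.0, 1.0)   # assumed: trust pixels coherent before the event
    dnf = np.exp(-dist_m / r0)         # assumed: exponential decay from the epicenter
    return damage_ratio(coh_pre, coh_co) * cqf * dnf
```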
Figures:
Figure 1. (a) Location of Tianjin Port, East China, shown on a true-color Sentinel-2 image from 18 September 2019 (ESA); the coverage of the ascending (Asc.) and descending (Desc.) SAR images used in this study is shown on the inset map. (b) A limitation of traditional coherence change detection: the change from positive elevation (building) to negative elevation (crater) exceeds its measurement capabilities.
Figure 2. On-site damage images as of 12 September 2015, approximately one month after the event (Google Earth). (a,c) The explosion epicenter, with a crater 97 m in diameter and 2.7 m deep; the toxic liquid, visible as a brownish color in the crater, likely contributed to unexplained decorrelation effects. (b) Complete destruction of buildings and vehicles; (d) containers displaced by the shockwave. (e) Chemically induced fires, taken on 13 August 2015, facing south. The incident had potential ecological impacts on Bohai Bay [37,38] and on regional air quality [8].
Figure 3. Workflow for estimating explosion damage severity and subsequent classification, adapted from the framework in [41].
Figure 4. Coherence changes before (a,c) and after (b,d) the Tianjin Port explosion. Descending SAR pairs: (a) 30 July–11 August 2015; (b) 11 August–23 August 2015. Ascending SAR pairs: (c) 1 July–25 July 2015; (d) 25 July–18 August 2015. Darker pixels represent lower coherence; light blue arrows highlight areas significantly impacted by the explosion, and dotted lines outline the affected regions.
Figure 5. Comparison of four ratio-based change approaches: (a) normalized change ratios, (b) logarithmic change ratios, (c) coherence change ratios, and (d) direct coherence ratio.
Figure 6. Pre- and post-explosion coherence change ratio distribution: (a) descending (track no. 149); (b) ascending (track no. 69).
Figure 7. Timeline of acquisition dates for (a) D149 and (b) A69, with the explosion and the pre- and post-event intervals marked.
Figure 8. Schematic of the integrated damage assessment factors: Distance Normalization Factor (DNF), Coherence Quality Factor (CQF), and Post-Event Temporal Factor (PETF) across pre- and post-event intervals, with the radial damage zones (inner, outer, and peripheral).
Figure 9. Weighted ratio classification for the Tianjin Port explosion, combining D149 and A69 data (profile width 500 m). Red polygons represent tall buildings; the histogram shows the weighted ratio distribution from 0.07 to >1.0.
Figure 10. Weighted ratio classification with profiles (a) from north to south and (b) from southwest to northeast, showing damage distribution from the epicenter.
Figure 11. Pre- and post-explosion coherence change ratio distribution for the Beirut explosion: (a) descending (track no. 21); (b) ascending (track no. 87); (c) ascending (track no. 14); (d) weighted ratio classification combining D21, A87, and A14 data.
Figure 12. Coherence profiles across the Tianjin Port explosion epicenter: (a,b) west–east profiles before (red) and after (blue) the explosion, revealing significant coherence reductions across the crater region; (c,d) the same along the south–north axis. Combined profiles outline the crater’s approximate dimensions: about 90 m west–east and 80 m north–south.
Figure 13. Seven-day forward trajectories for nearby regions at different altitudes on 13 and 19 August 2015, color-coded by release height above ground level: red 100 m, blue 500 m, green 1000 m. HYSPLIT outputs trajectory endpoint heights above both ground level and mean sea level [66].
17 pages, 2746 KiB  
Article
Deterministic Sea Wave Reconstruction and Prediction Based on Coherent S-Band Radar Using Condition Number Regularized Least Squares
by Zhongqian Hu, Zezong Chen, Chen Zhao and Xi Chen
Remote Sens. 2024, 16(22), 4147; https://doi.org/10.3390/rs16224147 - 7 Nov 2024
Viewed by 333
Abstract
Coherent S-band radar is a remote sensing instrument with high spatio-temporal resolution that can be used for deterministic sea wave reconstruction and prediction (DSWRP). However, because of its high resolution, coherent S-band radar also observes nonlinear details of the sea surface, which makes the propagation operator an ill-conditioned overdetermined matrix. To solve this problem, this paper proposes a DSWRP scheme for coherent S-band radar using condition number regularized least squares (CN-RLS). First, space-time velocity information was obtained from the radar echo. Second, the CN-RLS method solved for the phase-resolved model coefficients. Finally, the deterministic wave field was predicted from the solved model coefficients. The proposed scheme was verified with simulated data and a real radar dataset observed by the coherent S-band wave-measuring radar onboard the ship XIANGYANGHONG-18 in the East China Sea in April 2024. The predicted wave elevation was compared with the wave elevation observed by an X-band wave-measuring radar; the root mean square error (RMSE) and correlation coefficient (CC) were 0.22 m and 0.76, respectively, showing that the proposed method can effectively implement DSWRP.
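Condition-number regularization can be sketched as ridge-regularized least squares in which the ridge parameter is chosen to cap the condition number of the normal-equation matrix. Assuming the target condition number K applies to PᵀP + λI (a plausible but assumed convention), λ follows in closed form from the extreme singular values:

```python
import numpy as np

def cn_rls(P, y, K=1e6):
    """Least squares with the ridge parameter chosen so cond(P^T P + lam*I) <= K."""
    s = np.linalg.svd(P, compute_uv=False)
    smax2, smin2 = s[0] ** 2, s[-1] ** 2
    lam = 0.0 if smax2 <= K * smin2 else (smax2 - K * smin2) / (K - 1.0)
    A = P.T @ P + lam * np.eye(P.shape[1])
    return np.linalg.solve(A, P.T @ y)

# Toy usage: an ill-conditioned overdetermined system
rng = np.random.default_rng(1)
P = rng.standard_normal((200, 50)) @ np.diag(np.logspace(0, -8, 50))
c_true = rng.standard_normal(50)
y = P @ c_true + 1e-6 * rng.standard_normal(200)
c_hat = cn_rls(P, y)  # stable, unlike the unregularized normal equations
```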
Figures:
Figure 1. Illustration of surface wave velocity measured by radar.
Figure 2. Estimation of ŷ using the CN-RLS and L-curve RLS methods.
Figure 3. Distribution of CC as a function of K under different condition numbers cond(PᵀP).
Figure 4. Flow chart of DSWRP.
Figure 5. Spatial–temporal velocity series: (a) without broken waves; (b) with broken waves; (c) velocity time series of a single range cell.
Figure 6. Simulated and radar-observed wave spectrum.
Figure 7. Estimated model coefficients and amplitudes: (a) model coefficient a_ij; (b) model coefficient b_ij; (c) amplitude A_ij.
Figure 8. Absolute error of the DSWRP: (a) spatial–temporal absolute error with the L-curve RLS method; (b) spatial–temporal absolute error with the proposed method; (c) sea wave surface elevation at the 140th range cell; (d) absolute error at the 140th range cell.
Figure 9. (a) Experiment location map (the red five-pointed star marks the experimental site); (b) location of the radar installation on XIANGYANGHONG-18 (blue oval).
Figure 10. Illuminated region of antenna 4 and wave direction.
Figure 11. Echo data from the coherent S-band radar observed by antenna 4 on 8 April 2024, 20:08–20:11: (a) time–Doppler spectra at the 30th range bin; (b) space-time radial velocity series; (c) wavenumber–frequency spectrum of (b).
Figure 12. DSWRP results from the velocities in Figure 11b using (a) the L-curve RLS method and (b) the CN-RLS method.
Figure 13. Sea surface elevation predicted by the S-band coherent radar and wave elevation observed by the X-band radar on 8 April 2024.
Figure 14. Scatterplot of predicted (S-band) versus observed (X-band) wave elevation.
Figure 15. (a) Correlation coefficient and (b) root mean square error of DSWRP using the L-curve RLS method and the proposed method under different wind speeds.
18 pages, 5723 KiB  
Article
Airborne Multi-Channel Forward-Looking Radar Super-Resolution Imaging Using Improved Fast Iterative Interpolated Beamforming Algorithm
by Ke Liu, Yueli Li, Zhou Xu, Zhuojie Zhou and Tian Jin
Remote Sens. 2024, 16(22), 4121; https://doi.org/10.3390/rs16224121 - 5 Nov 2024
Viewed by 515
Abstract
Radar forward-looking imaging is critical in many civil and military fields, such as aircraft landing, autonomous driving, and geological exploration. Although super-resolution forward-looking imaging algorithms based on spectral estimation have the potential to discriminate multiple targets within the same beam, their estimates of target angle and magnitude are inaccurate due to sidelobe leakage. This paper proposes a multi-channel super-resolution forward-looking imaging algorithm based on an improved Fast Iterative Interpolated Beamforming (FIIB) algorithm to solve this problem. First, the number of targets and coarse estimates of angle and magnitude are obtained from the iterative adaptive approach (IAA). Then, accurate estimates of angle and magnitude are achieved through FIIB’s strategy of iterative interpolation and leakage subtraction. Finally, a high-resolution forward-looking image is obtained through non-coherent accumulation. Simulation results for point targets and scenes show that the proposed algorithm can distinguish multiple targets in the same beam, effectively improve the azimuthal resolution of forward-looking imaging, and attain accurate reconstruction of point targets and contour reconstruction of extended targets.
(This article belongs to the Section Remote Sensing Image Processing)
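The iterate-interpolate-subtract loop can be illustrated in one dimension, where angle estimation reduces to frequency estimation of superimposed sinusoids. The sketch below uses an Aboutanios-Mulgrew-style half-bin interpolator for refinement and subtracts each estimated component to suppress its leakage; it is a simplified analogue of the paper's multi-target scheme, with illustrative iteration counts.

```python
import numpy as np

def refine_freq(x, k, n_iter=3):
    """Half-bin DFT interpolation around coarse bin k (Aboutanios-Mulgrew style)."""
    N, n, delta = len(x), np.arange(len(x)), 0.0
    for _ in range(n_iter):
        Xp = np.sum(x * np.exp(-2j * np.pi * (k + delta + 0.5) * n / N))
        Xm = np.sum(x * np.exp(-2j * np.pi * (k + delta - 0.5) * n / N))
        delta += 0.5 * np.real((Xp + Xm) / (Xp - Xm))
    return (k + delta) / N  # normalized frequency

def fiib_like(x, n_targets):
    """Iterative interpolation with leakage subtraction for multiple components."""
    N, n = len(x), np.arange(len(x))
    residual, est = x.astype(complex).copy(), []
    for _ in range(n_targets):
        k = int(np.argmax(np.abs(np.fft.fft(residual))))   # coarse peak
        f = refine_freq(residual, k)                       # fine estimate
        a = np.sum(residual * np.exp(-2j * np.pi * f * n)) / N  # complex amplitude
        est.append((f, a))
        residual -= a * np.exp(2j * np.pi * f * n)         # subtract this component's leakage
    return est
```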
Figures:
Figure 1. Geometry for forward-looking imaging of a scanning radar.
Figure 2. Sidelobe spillover effect: (a) two point targets; (b) one strong point target with one weak point target.
Figure 3. An example where the FIIB algorithm fails.
Figure 4. Estimated results of different methods for multiple point targets.
Figure 5. Comparison of forward-looking imaging performance for the point target simulation: (a) original point target distribution; (b) real aperture imaging; (c) monopulse imaging; (d) monopulse imaging based on Doppler estimates using FIIB; (e) multi-channel imaging based on FIIB; (f) multi-channel imaging based on the improved FIIB.
Figure 6. Normalized profiles for point targets at the 1670 m range cell.
Figure 7. Comparison of forward-looking imaging performance for the scene simulation: (a) original Ku-band SAR map; (b) real aperture imaging; (c) monopulse imaging; (d) monopulse imaging based on Doppler estimates using FIIB; (e) multi-channel imaging based on FIIB; (f) multi-channel imaging based on the proposed FIIB.
34 pages, 14046 KiB  
Article
High-Resolution Collaborative Forward-Looking Imaging Using Distributed MIMO Arrays
by Shipei Shen, Xiaoli Niu, Jundong Guo, Zhaohui Zhang and Song Han
Remote Sens. 2024, 16(21), 3991; https://doi.org/10.3390/rs16213991 - 27 Oct 2024
Viewed by 836
Abstract
Airborne radar forward-looking imaging holds significant promise for applications such as autonomous navigation, battlefield reconnaissance, and terrain mapping. However, traditional methods are hindered by complex system design, azimuth ambiguity, and low resolution. This paper introduces a distributed-array collaborative forward-looking imaging approach in which multiple aircraft carrying linear arrays fly in parallel to achieve coherent imaging. We analyze the characteristics of the signal model and highlight the limitations of conventional algorithms. To address these issues, we propose a high-resolution imaging algorithm that combines an enhanced missing-data iterative adaptive approach with an aperture interpolation technique (MIAA-AIT) for effective signal recovery in distributed arrays. Additionally, a novel reference range cell migration correction (reference RCMC) is employed for precise range–azimuth decoupling. The algorithm effectively transforms distributed arrays into a virtual long-aperture array, enabling high-resolution, high signal-to-noise ratio imaging with a single snapshot. Simulations and real-data tests demonstrate that our method not only improves resolution but also offers flexible array configurations and robust performance in practical applications.
(This article belongs to the Topic Radar Signal and Data Processing with Applications)
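The difficulty that the recovery step addresses can be seen by writing down the array manifold of a gapped distributed aperture: the missing virtual elements between platforms raise grating-lobe-like artifacts that gap-filling (the role of the paper's MIAA-AIT) must suppress before azimuth focusing. A small sketch with hypothetical geometry, not the paper's parameters:

```python
import numpy as np

lam = 0.03                    # wavelength [m]; all geometry below is assumed
d = lam / 2
n_sub, gap = 16, 48           # two 16-element subarrays separated by a 48-element gap
pos = np.concatenate([np.arange(n_sub),
                      n_sub + gap + np.arange(n_sub)]) * d

theta = np.deg2rad(np.linspace(-10, 10, 2001))
# Array manifold of the gapped aperture and its uniform-weight beampattern
A = np.exp(2j * np.pi * np.outer(np.sin(theta), pos) / lam)
pattern_db = 20 * np.log10(np.abs(A.sum(axis=1)) / (2 * n_sub))
# The gap raises near-in lobes dramatically; filling the 48 missing virtual
# elements restores a clean long-aperture mainlobe before imaging.
```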
Figures:
Figure 1. Geometric configuration of the system.
Figure 2. Analysis of the single-array configuration: (a) equivalent antenna transformation; (b) configuration of the actual array and the equivalent virtual array.
Figure 3. Mismatch between traditional algorithms and the distributed imaging model: (a) azimuth time-domain envelope of echo sampling in distributed arrays; (b) azimuth spectrum of echo sampling in distributed arrays; (c) azimuth focusing results using single-array and distributed multi-array configurations.
Figure 4. Analysis of the system’s range cell migration: (a) single-array RCM; (b) inter-array RCM.
Figure 5. Comparison between the proposed RCMC and traditional RCMC.
Figure 6. Coherent processing of azimuth gapped signals.
Figure 7. Overall workflow of the distributed-array collaborative forward-looking imaging.
Figure 8. Original reference image.
Figure 9. Comparative analysis of RCMC algorithms: (a,b) target signals in the range–Doppler and time domains before RCMC; (c,d) the same after traditional RCMC; (e,f) the same after the proposed RCMC; (g) imaging results using traditional RCMC; (h) imaging results using the proposed RCMC.
Figure 10. Forward-looking imaging performance of the proposed distributed-array coherent processing algorithm: (a) original reference image; (b) target envelope formed by ECS using a single array; (c) target envelope formed by the ECS-based full-aperture algorithm; (d–f) distributed-array signals, the azimuth virtual long-aperture signal, and the target envelope for 10 m inter-array spacing at 25 dB SNR; (g–i) the same for 20 m spacing at 25 dB SNR; (j–l) the same for 10 m spacing at 10 dB SNR.
Figure 11. Comparison of recovery algorithms: (a–d) target envelopes based on LPM-AIT, GAPES, OMP, and ISTA with 10 m array spacing; (e) target envelope from the ECS algorithm with a 20 m real aperture; (f–j) target envelopes based on the improved MIAA-AIT, LPM-AIT, GAPES, OMP, and ISTA with 20 m array spacing; (k) target envelope from the ECS algorithm with a 40 m real aperture; (l) target envelope based on the improved MIAA-AIT with 20 m array spacing.
Figure 12. Comparison of gapped-signal recovery capabilities between different algorithms.
Figure 13. Simulation results for surface targets using various algorithms: (a) original image; (b) imaging with a 20 m aperture radar based on the ECS algorithm; (c) imaging with a single-array radar based on the ECS algorithm; (d–f) distributed-array imaging based on the OMP, LPM-AIT, and MIAA-AIT algorithms.
Figure 14. Comparison of algorithms with measured data: (a,b) overall experimental setup photos; (c,d) imaging results and target azimuth envelope with a 0.5 m synthetic array; (e,f) the same with a single cascade radar; (g–p) imaging results and target azimuth envelopes using the distributed array based on the OMP, ISTA, LPM-AIT, GAPES, and MIAA-AIT algorithms.
19 pages, 21263 KiB  
Article
Interferometric Synthetic Aperture Radar Phase Linking with Level 2 Coregistered Single Look Complexes: Enhancing Infrastructure Monitoring Accuracy at Algeciras Port
by Jaime Sánchez-Fernández, Alfredo Fernández-Landa, Álvaro Hernández Cabezudo and Rafael Molina Sánchez
Remote Sens. 2024, 16(21), 3966; https://doi.org/10.3390/rs16213966 - 25 Oct 2024
Viewed by 497
Abstract
This paper presents an advanced workflow for processing radar imagery stacks using Persistent Scatterer and Distributed Scatterer Interferometry (PSDS) to enhance spatial coherence and improve displacement detection accuracy. The workflow leverages Level 2 Coregistered Single Look Complex (L2-CSLC) images generated by the open-source COMPASS (Coregistered Multi-temporal Sar SLC) framework in combination with the Combined eigenvalue maximum likelihood Phase Linking (CPL) approach implemented in MiaplPy. Starting the analysis directly from Level 2 products offers a significant advantage to end-users, as these products are pre-geocoded and ready for immediate analysis. The open-source nature of the workflow and the use of L2-CSLC products also simplify the processing pipeline, making it easier to distribute directly to users for monitoring infrastructure stability in dynamic environments. The ISCE3-MiaplPy workflow is compared against ISCE2-MiaplPy and the European Ground Motion Service (EGMS) to assess its performance in detecting infrastructure deformation in dynamic environments such as the Algeciras port. The results indicate that ISCE3-MiaplPy delivers denser measurements, albeit with increased noise, compared with its counterparts. This higher resolution enables a more detailed understanding of infrastructure stability and surface dynamics, which is critical for environments subject to ongoing human activity or natural forces.
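The velocity maps compared against EGMS are obtained by regressing each pixel's displacement time series on time. A minimal version of that per-pixel step, assuming a simple linear-plus-offset model (the actual pipelines fit richer models):

```python
import numpy as np

def los_velocity(t_days, disp_mm):
    """LOS velocity (mm/yr) from a displacement time series by least squares."""
    t_yr = np.asarray(t_days, dtype=float) / 365.25
    G = np.column_stack([t_yr, np.ones_like(t_yr)])   # linear + offset model
    coef, *_ = np.linalg.lstsq(G, np.asarray(disp_mm, dtype=float), rcond=None)
    return coef[0]

# e.g., 6-day Sentinel-1 sampling over one year, subsiding at -4 mm/yr
t = np.arange(0, 366, 6)
print(los_velocity(t, -4.0 * t / 365.25 + 0.3))  # ~ -4.0
```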
Figures:
Figure 1. (a) Three swaths of the interferometric wide swath mode, ascending Track 74; Algeciras port lies in IW1, and burst t074-157011-iw1 (blue rectangle) was processed. (b) AOI processed with the PSDS software.
Figure 2. Proposed workflow schema.
Figure 3. Coregistered SLC timing corrections for the whole burst on 24 March 2020: (a) slant-range geometrical Doppler, (b) azimuth bistatic delay, (c) azimuth FM rate mismatch, (d) slant-range solid Earth tides, (e) azimuth-time solid Earth tides, (f) line-of-sight ionospheric delay, (g) wet LOS troposphere, (h) dry LOS troposphere.
Figure 4. InSAR network selection: (a) mask of connected components before (purple) and after (yellow) IFG selection; (b) number of connected components; (c) number of unconnected IFGs per pixel; (d) number of unconnected pixels per IFG, with discarded IFGs in yellow; (e) selected IFG network.
Figure 5. (a) Temporal coherence, (b) mean amplitude, (c) scatterer type, (d) amplitude dispersion.
Figure 6. (a) EGMS velocity for the Algeciras port. (b) The same area processed using CSLCs and phase linking. (c) The same area processed using ISCE2 with geocoding after phase linking. The reference point used for processing is highlighted in white in (b,c).
Figure 7. (a) Velocity histograms for ISCE3-MiaplPy, ISCE2-MiaplPy, and EGMS over the AOI. (b) Histogram of velocity differences between ISCE3 and EGMS. (c) Histogram of velocity differences between ISCE2 and ISCE3.
Figure 8. (a) Comparison of a group of time series over the EVOS Terminal in ISCE3-MiaplPy and EGMS. (b) Measurement points over the area from EGMS, colored by velocity. (c) The same for ISCE3-MiaplPy.
Figure 9. (a) Comparison of a group of time series over Isla Verde Exterior in ISCE3-MiaplPy and EGMS. (b) Measurement points over the area from EGMS, colored by velocity. (c) The same for ISCE3-MiaplPy.
18 pages, 14524 KiB  
Article
Evaluating the Impact of Interferogram Networks on the Performance of Phase Linking Methods
by Saeed Haji Safari and Yasser Maghsoudi
Remote Sens. 2024, 16(21), 3954; https://doi.org/10.3390/rs16213954 - 23 Oct 2024
Viewed by 659
Abstract
In recent years, phase linking (PL) methods in radar time-series interferometry (TSI) have proven to be powerful tools in geodesy and remote sensing, enabling precise monitoring of surface displacement and deformation. While these methods are typically designed to operate on a complete network of interferograms, generating such networks is often challenging in practice. For instance, in non-urban or vegetated regions, decorrelation effects lead to significant noise in long-term interferograms, which can degrade the time-series results if included. Practical issues such as gaps in satellite data, poor acquisitions, or systematic errors during interferogram generation can also result in incomplete networks. Furthermore, pre-existing interferogram networks, such as those provided by systems like COMET-LiCSAR, often prioritize short temporal baselines due to the vast volume of data generated by satellites like Sentinel-1. As a result, complete interferogram networks may not always be available. Given these challenges, it is critical to understand how PL methods behave on incomplete networks. This study evaluated the performance of two PL methods, eigenvalue decomposition (EVD) and the eigendecomposition-based maximum-likelihood estimator of interferometric phase (EMI), under various network configurations, including short temporal baselines, randomly sparsified networks, and networks from which low-coherence interferograms had been removed. Using two sets of simulated data, the impact of different network structures on the accuracy and quality of the results was assessed; the same patterns were then applied to real data for further comparison and analysis. The findings demonstrate that while both methods can be used effectively on short temporal baselines, their performance is highly sensitive to network sparsity and to the noise introduced by low-coherence interferograms, requiring careful parameter tuning to achieve optimal results across different study areas.
(This article belongs to the Special Issue Analysis of SAR/InSAR Data in Geoscience)
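Both estimators reduce to an eigenproblem on the complex sample correlation matrix. The sketch below follows common conventions for EVD and EMI and is not the authors' code; the `mask` argument crudely zeroes missing interferograms, which real pipelines handle more carefully:

```python
import numpy as np

def phase_link(C, mask=None, method="EVD"):
    """Single-master phase series from an N x N complex sample correlation matrix C.
    `mask` is an optional symmetric 0/1 matrix (ones on the diagonal) encoding an
    incomplete interferogram network."""
    C = C * mask if mask is not None else C
    if method == "EVD":
        M, col = C, -1                                   # leading eigenvector
    else:  # EMI: Hadamard product with the inverse coherence-magnitude matrix
        M = np.linalg.inv(np.abs(C) + 1e-6 * np.eye(len(C))) * C
        col = 0                                          # minimal eigenvector
    vals, vecs = np.linalg.eigh(M)                       # M is Hermitian
    v = vecs[:, col]
    return np.angle(v * np.conj(v[0]))                   # referenced to epoch 0
```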
Figures:
Figure 1. Visualization of different interferogram networks: (a) single-master network, (b) multi-master network, and (c) fully connected network. The x-axis is the acquisition date; the y-axis is the perpendicular baseline in meters.
Figure 2. Coherence matrix structures under different interferogram network configurations (with corresponding graphs at the top right of each matrix): (a) banded matrix configuration, focusing on short temporal baselines; (b) sparse matrix configuration; (c) coherence thresholding configuration. White cells indicate missing interferograms or removed indices in the coherence and/or SCM matrix.
Figure 3. RMSE of the estimated single-master phase series for the EMI and EVD methods: (a–c) Case A of the simulation; (d–f) Case B. In each panel, the left column shows EMI and the right column EVD.
Figure 4. RMSE comparison for EMI and EVD under different coherence matrix configurations: (a) EMI using a sparsified estimated coherence matrix from the banded sparse SCM for inversion; (b) EMI using an estimated coherence matrix from the banded SCM without sparsity; (c) EMI using the true but banded coherence matrix from the modeling step; (d) EVD.
Figure 5. (a) Estimated single-master phase series for five cases: the true phase series (blue), which remains nearly zero except for small values at short-term indices simulating phase bias, compared against the series from bw-3, bw-5, bw-10, and the fully connected network. As the bandwidth increases and more long-term interferograms are included, the estimated phase series approaches the true series. (b) Displacement rates calculated from all five phase series, showing how even a small phase bias in the short-term interferograms leads to a significant overestimation of the displacement rate.
Figure 6. Study area overview: (a) the study site in southwest Iran, west of the city of Ahvaz, with the area boundary marked by the white polygon (background: USGS Landsat 8 Level 2, Collection 2, Tier 1, LANDSAT/LC08/C02/T1_L2); (b) land cover classification map derived in Google Earth Engine from the ESA WorldCover 10 m 2020 product [30].
Figure 7. (a) Displacement rate maps (mm/year) for EMI (first row) and EVD (second row); columns correspond to banded networks with bandwidths of 3, 5, 15, 30, and 45 and to the fully connected network. (b–d) Mean displacement rate for EMI, calculated from the line-of-sight (LOS) cumulative displacements of 100 adjacent pixels for built-up areas, cropland, and bare land, respectively; (e–g) the same for EVD.
Figure 8. Histograms of the differences between displacement rates for banded configurations (bw-5, 10, 15) and their sparsified networks at varying percentages for the EMI method: (a–c) bw-5 across built-up, cropland, and bare land; (d–f) the same for bw-10; (g–i) the same for bw-15. Each plot reports the histogram mean and standard deviation to highlight the impact of increasing sparsity for each land cover type.
Figure 9. As Figure 8, but for the EVD method.
Figure 10. (a–c) Reconstructed interferograms of one of the original 6-day interferograms using the linked-phase results from EMI, and (d–f) the corresponding velocity maps for different interferogram networks: (a,d) fully connected network; (b,e) coherence threshold of 0.4; (c,f) coherence threshold of 0.5.
Figure 11. As Figure 10, but using the linked-phase results from EVD.
17 pages, 10820 KiB  
Article
Multiple-Input Multiple-Output Microwave Tomographic Imaging for Distributed Photonic Radar Network
by Carlo Noviello, Salvatore Maresca, Gianluca Gennarelli, Antonio Malacarne, Filippo Scotti, Paolo Ghelfi, Francesco Soldovieri, Ilaria Catapano and Rosa Scapaticci
Remote Sens. 2024, 16(21), 3940; https://doi.org/10.3390/rs16213940 - 23 Oct 2024
Viewed by 562
Abstract
This paper deals with the imaging problem for data collected by a microwave photonics-based distributed radar network. The network is built on a centralized architecture composed of one central unit (CU) and two transmitting and receiving dual-band remote radar peripherals (RPs), and it is capable of collecting monostatic and multistatic phase-coherent data. Imaging is formulated as a linear inverse scattering problem and solved in a regularized way through a truncated singular value decomposition (TSVD) inversion scheme. Specifically, two imaging schemes, one based on incoherent fusion of the tomographic images and one on fully coherent data processing, are developed and compared. Experimental tests carried out in a port scenario, imaging both a stationary and a moving target, are reported to validate the approach.
(This article belongs to the Special Issue State-of-the-Art and Future Developments: Short-Range Radar)
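Regularized inversion via truncated SVD keeps only the singular components above a noise-dependent threshold; where to truncate trades spatial resolution against noise amplification. A generic sketch (the relative threshold is an assumed tuning choice, not the paper's value):

```python
import numpy as np

def tsvd_solve(L, y, rel_thresh=0.1):
    """Truncated-SVD solution of the linear system L @ x ~ y: singular values
    below rel_thresh * s_max are discarded to stabilize the inversion."""
    U, s, Vh = np.linalg.svd(L, full_matrices=False)
    keep = s >= rel_thresh * s[0]
    # x = V_k diag(1/s_k) U_k^H y
    return (Vh[keep].conj().T / s[keep]) @ (U[:, keep].conj().T @ y)
```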
Figures:
Graphical abstract.
Figure 1. Radar network configuration at Livorno harbor, Italy. RP1 and RP2 indicate the locations of the active radar peripherals; the green and yellow triangles represent, approximately, the antenna viewing angles.
Figure 2. Block diagram of the signal processing pipeline.
Figure 3. A range–Doppler map from an experimental test. The white and red ellipses show the contributions from static and moving targets, respectively.
Figure 4. Geometry of the radar imaging problem.
Figure 5. RD maps (normalized amplitude in dB) for each radar channel, Acquisition 1: (a) RP1-RP1, (b) RP1-RP2, (c) RP2-RP1, (d) RP2-RP2.
Figure 6. Targets under test: (a) the lighthouse, the main static scatterer in the observed scene; (b) the Cruise Sardegna ferry and its daily route in the harbor (white line; source: Google Earth).
Figure 7. Tomographic reconstructions of the static target for Acquisition 1 in a local coordinate system (a,b) and in a geographic coordinate system (c,d), shown with a [−3, 0] dB range: MIMO-MWT-incoherent approach (left panels, a,c) and MIMO-MWT-coherent approach (right panels, b,d). The green and yellow dots mark radar nodes RP1 and RP2; the red circle marks the lighthouse.
Figure 8. Zoomed-in sections of the RD map (normalized amplitude in dB) for each radar channel, Acquisition 1: (a) RP1-RP1, (b) RP1-RP2, (c) RP2-RP1, (d) RP2-RP2.
Figure 9. MIMO-MWT imaging reconstructions in a local coordinate system with a [−3, 0] dB range: (a) channel TX1-RX1; (b) channel TX1-RX2; (c) channel TX2-RX1; (d) channel TX2-RX2; (e) MIMO incoherent imaging; (f) MIMO coherent imaging.
Figure 10. Georeferenced tomographic reconstructions of the moving target for acquisition frames 1–8 (a–h), obtained with the MIMO-MWT-incoherent approach.
Figure 11. As Figure 10, but obtained with the MIMO-MWT-coherent approach.
18 pages, 4741 KiB  
Article
Estimation of Glacier Outline and Volume Changes in the Vilcanota Range Snow-Capped Mountains, Peru, Using Temporal Series of Landsat and a Combination of Satellite Radar and Aerial LIDAR Images
by Nilton Montoya-Jara, Hildo Loayza, Raymundo Oscar Gutiérrez-Rosales, Marcelo Bueno and Roberto Quiroz
Remote Sens. 2024, 16(20), 3901; https://doi.org/10.3390/rs16203901 - 20 Oct 2024
Viewed by 687
Abstract
The Vilcanota is the second-largest snow-capped mountain range in Peru, featuring 380 individual glaciers, each with its own unique characteristics that must be studied independently. However, few studies have been conducted in the Vilcanota range to monitor and track the area and volume changes of the Suyuparina and Quisoquipina glaciers, and only a few have approached this issue using LIDAR technology. Our methodology combines optical, radar, and LIDAR data sources, which allowed us to construct coherent temporal series of both the perimeter and volume changes of the Suyuparina and Quisoquipina glaciers while accounting for the uncertainty in the perimeter detection procedure. Our results indicated that, from 1990 to 2013, there was a reduction in snow cover of 12,694.35 m² per year for Quisoquipina and 16,599.2 m² per year for Suyuparina. This represents a loss of 12.18% for Quisoquipina and 22.45% for Suyuparina. From 2006 to 2013, the volume of the Quisoquipina glacier decreased from 11.73 km³ in 2006 to 11.04 km³ in 2010, while the Suyuparina glacier decreased from 6.26 km³ to 5.93 km³. Likewise, when analyzing the correlation between glacier area and precipitation, a moderate inverse correlation (R = −0.52, p < 0.05) was found for Quisoquipina. In contrast, the correlation for Suyuparina was low and nonsignificant, showing an inconsistent effect of precipitation. Additionally, the correlation between the snow cover area and the annual mean air temperature (R = −0.34, p > 0.05) and annual minimum air temperature (R = −0.36, p > 0.05) was low, inverse, and not significant for Quisoquipina. Meanwhile, snow cover on Suyuparina had a low, nonsignificant correlation (R = −0.31, p > 0.05) with the annual maximum air temperature, indicating a minimal influence of the measured climatic variables near this glacier on its retreat. In general, it was possible to establish a reduction in both the area and volume of the Suyuparina and Quisoquipina glaciers based on freely accessible remote sensing data. Full article
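The snow-cover outlines in this study trace back to binarized NDSI images (see Figure 2 below). As a hedged illustration, the following sketch computes the standard normalized-difference snow index from green and shortwave-infrared reflectance and converts the resulting mask to an area; the 0.4 threshold and the 30 m Landsat pixel size are common defaults, not necessarily the exact values used by the authors.

```python
import numpy as np

def ndsi_snow_mask(green, swir, threshold=0.4):
    """Binary snow/ice mask: NDSI = (green - swir) / (green + swir),
    flagging pixels whose NDSI exceeds the threshold."""
    green = green.astype(np.float64)
    swir = swir.astype(np.float64)
    denom = green + swir
    ndsi = np.where(denom > 0, (green - swir) / denom, 0.0)
    return ndsi > threshold

def snow_area_m2(mask, pixel_size_m=30.0):
    # Area = number of snow pixels times the area of one pixel.
    return mask.sum() * pixel_size_m ** 2
```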
(This article belongs to the Section Remote Sensing and Geo-Spatial Science)
Show Figures

Figure 1
An airborne LIDAR point cloud of 3.2 m spatial resolution acquired over the Suyuparina and Quisoquipina glaciers in the province of Canchis, Cusco.
Figure 2
Binarized NDSI images recovered from Landsat 5 imagery from May 1990 (A) and Landsat 7 from April 2013 (B) for the Suyuparina and Quisoquipina glaciers. The orange and red outlines show the shapefiles of the Suyuparina and Quisoquipina glaciers, delimited by expert criteria.
Figure 3
Processing scheme. Blue represents inputs, green represents processing, yellow represents intermediate processing, and purple represents outputs.
Figure 4
(A) Glacierized area of the Quisoquipina and (B) Suyuparina glaciers. The gray band shows the uncertainty of the estimated glacier area.
Figure 5
(Up) Glacierized area of the Quisoquipina and (Down) Suyuparina glaciers, analyzed from 1990 to 1999 and from 2000 to 2013.
Figure 6
(A) Volume changes of the Quisoquipina and (B) Suyuparina glaciers. Confidence intervals of the fitted linear model are shown in gray.
Figure 7
(A) Glacierized outlines of the Suyuparina and Quisoquipina glaciers. (B) Elevation change based on ALOS and LIDAR DEM analysis. Snow glaciological stakes installed between 2014 and 2016 are shown for reference. The background image corresponds to Google Earth 2019.
Figure 8
Scatterplots of climatic and glacier surface changes. MeanAPE: mean annual potential evapotranspiration (mm/day), MeanAP: annual mean precipitation (mm/day), MaxAAT: maximum annual air temperature (°C), MinAAT: minimum annual air temperature (°C), MeanAMAT: mean annual mean air temperature (°C).
16 pages, 25832 KiB  
Article
Identifying Potential Landslides in Low-Coherence Areas Using SBAS-InSAR: A Case Study of Ninghai County, China
by Jin Xu, Shijie Ge, Chunji Zhuang, Xixuan Bai, Jianfeng Gu and Bingqiang Zhang
Geosciences 2024, 14(10), 278; https://doi.org/10.3390/geosciences14100278 - 19 Oct 2024
Viewed by 686
Abstract
The southeastern coastal regions of China are characterized by typical hilly terrain with abundant rainfall throughout the year, leading to frequent geological hazards. To investigate the measurement accuracy of surface deformation and the effectiveness of error correction methods using the small baseline subset interferometric synthetic aperture radar (SBAS-InSAR) method for identifying potential geological hazards in such areas, this study processes and analyzes 129 SAR images covering Ninghai County, China. Coherence coefficients are processed with a stacking technique to mitigate the errors that low-coherence images introduce during phase unwrapping. Subsequently, interferograms with high coherence are selected for time-series deformation analysis based on the statistical parameters of the coherence coefficients. The results indicate that, after mitigating errors from low-coherence images, applying the SBAS-InSAR method only to high-coherence SAR datasets provides reliable surface deformation results. Additionally, when combined with field geological survey data, this method successfully identified landslide boundaries and potential landslides not accurately detected in previous geological surveys. This study demonstrates that using the SBAS-InSAR method and selecting high-coherence SAR images based on the statistical parameters of interferogram coherence significantly improves measurement accuracy and effectively identifies potential geological hazards. Full article
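The interferogram-selection step described above can be sketched in a few lines: compute a per-interferogram statistic from the coherence maps and keep only the scenes that clear a cutoff. In this simplified illustration, the statistic (spatial RMS of the coherence field) and the threshold value are assumptions standing in for the study's actual stacking and selection scheme.

```python
import numpy as np

def coherence_statistics(coh_stack):
    """coh_stack: (n_ifg, rows, cols) stack of interferometric coherence maps.
    Returns per-interferogram RMS coherence and a per-pixel mean map
    (a simple stand-in for stacked coherence)."""
    coh = coh_stack.astype(np.float64)
    rms_per_ifg = np.sqrt((coh ** 2).mean(axis=(1, 2)))
    mean_map = coh.mean(axis=0)
    return rms_per_ifg, mean_map

def select_high_coherence(coh_stack, rms_threshold=0.35):
    """Indices of interferograms whose spatial RMS coherence clears the
    (dataset-specific) threshold; the rest are dropped before SBAS inversion."""
    rms_per_ifg, _ = coherence_statistics(coh_stack)
    return np.flatnonzero(rms_per_ifg >= rms_threshold)
```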
Show Figures

Figure 1
(a) The red rectangle indicates the area of Ninghai County; (b) the topography of Ninghai County, with black stars marking locations in Sangzhou Town, including the Nanshanzhang and Liufeng landslides.
Figure 2
Temporal and spatial baseline distribution plot for the first set of SAR images. The horizontal axis represents the acquisition time of the radar images, with the numbers after the decimal point indicating a decimal fraction of the year for greater precision.
Figure 3
Stacking coherence coefficient and standard deviation after weighted averaging. (a) Coherence coefficient map for SAR image interferograms of Ninghai County and surrounding areas; (b) coherence coefficient map for regions including the Nanshanzhang and Liufeng landslides in Nanling Village; (c,d) standard deviations of the weighted averages for the corresponding areas. The pentagrams indicate the positions of sliding masses identified through field geological surveys. In panel (d), the locations marked A and B represent the core deformation areas identified through field geological surveys and InSAR observations and were thus designated as feature points for subsequent deformation extraction.
Figure 4
A subset of the interferograms and the corresponding root-mean-square (RMS) values of the coherence coefficients.
Figure 5
Statistical parameters of interferometric coherence in SAR images.
Figure 6
Cumulative deformation of characteristic points in the study area. (a) Time-series deformation at feature points for the first dataset; (b) time-series deformation at feature points for the first dataset after removing interferograms with high noise, based on a coherence RMS threshold; (c) time-series deformation at feature points for the second dataset; (d) time-series deformation at feature points for the third dataset.
Figure 7
Temporal and spatial baseline distribution plot for the second set of SAR images. The horizontal axis represents the acquisition time of the radar images, with the numbers after the decimal point indicating a decimal fraction of the year for greater precision.
Figure 8
Statistical parameters of interferometric coherence corresponding to the second set of SAR images.
Figure 9
Temporal and spatial baseline distribution plot for the third set of SAR images. The horizontal axis represents the acquisition time of the radar images, with the numbers after the decimal point indicating a decimal fraction of the year for greater precision.
Figure 10
Statistical parameters of interferometric coherence corresponding to the third set of SAR images.
Figure 11
Surface deformation rate along the radar line of sight (LOS) in the study area. (a) Ninghai County; (b) the research area in Sangzhou Town.
Figure 12
Cumulative surface deformation along the line of sight (LOS) in the study area. (a) Ninghai County; (b) the research area in Sangzhou Town.
Figure 13
Deformation and geomorphological characteristics of the Nanshanzhang and Liufeng landslides. (a) On-site topography from the field geological survey of the potential landslide at Nanshanzhang (view toward NW); (b) on-site topography at Nanshanzhang (view toward N); (c) optical image of the study area; (d) cumulative surface deformation of the study area; (e) on-site topography from the field geological survey of the potential landslide at Liufeng (view toward E); (f) on-site topography at Liufeng (view toward N).
34 pages, 8862 KiB  
Article
A Novel Detection Transformer Framework for Ship Detection in Synthetic Aperture Radar Imagery Using Advanced Feature Fusion and Polarimetric Techniques
by Mahmoud Ahmed, Naser El-Sheimy and Henry Leung
Remote Sens. 2024, 16(20), 3877; https://doi.org/10.3390/rs16203877 - 18 Oct 2024
Viewed by 840
Abstract
Ship detection in synthetic aperture radar (SAR) imagery faces significant challenges due to the limitations of traditional methods, such as convolutional neural network (CNN) and anchor-based matching approaches, which struggle to detect smaller targets accurately and to adapt to varying environmental conditions. These methods, relying on either intensity values or single-target characteristics, often fail to enhance the signal-to-clutter ratio (SCR) and are prone to false detections caused by environmental factors. To address these issues, a novel framework is introduced that leverages the detection transformer (DETR) model along with advanced feature fusion techniques to enhance ship detection. Its feature enhancement DETR (FEDETR) module manages clutter and improves feature extraction through preprocessing techniques such as filtering, denoising, and applying maximum and median pooling with various kernel sizes. Furthermore, it combines metrics such as the line spread function (LSF), peak signal-to-noise ratio (PSNR), and F1 score to predict optimal pooling configurations and thus enhance edge sharpness, image fidelity, and detection accuracy. Complementing this, the weighted feature fusion (WFF) module integrates polarimetric SAR (PolSAR) methods such as Pauli decomposition, coherence matrix analysis, and feature volume and helix scattering (Fvh) component decomposition, along with FEDETR attention maps, to provide detailed radar scattering insights that enhance ship response characterization. Finally, by integrating wave polarization properties, the framework augments the ability to distinguish and characterize targets, thereby improving the SCR and facilitating the detection of weakly scattering targets in SAR imagery. Overall, this new framework significantly boosts DETR's performance, offering a robust solution for maritime surveillance and security. Full article
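To make the pooling-selection idea concrete, the sketch below scores max- and median-pooled versions of a SAR image chip by PSNR against the original and returns the best configuration. The paper combines LSF, PSNR, and F1 score to predict the optimal pooling; this fragment implements only the PSNR criterion, and the kernel sizes simply mirror those reported in the figures below.

```python
import numpy as np
from scipy.ndimage import maximum_filter, median_filter

def psnr(reference, test):
    """Peak signal-to-noise ratio (dB) between two same-shape images."""
    ref = reference.astype(np.float64)
    mse = np.mean((ref - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(ref.max() ** 2 / mse)

def best_pooling(image, kernel_sizes=(3, 5, 7, 9)):
    """Return the (pooling type, kernel size) pair whose pooled image
    preserves the original most faithfully in the PSNR sense."""
    candidates = {}
    for k in kernel_sizes:
        candidates[("max", k)] = maximum_filter(image, size=k)
        candidates[("median", k)] = median_filter(image, size=k)
    return max(candidates, key=lambda cfg: psnr(image, candidates[cfg]))
```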
(This article belongs to the Special Issue Target Detection with Fully-Polarized Radar)
Show Figures

Graphical abstract
Figure 1">
Figure 1
Flowchart of the proposed ship detection in SAR imagery.
Figure 2
CNN preprocessing model.
Figure 3
DETR pipeline overview [52].
Figure 4
Performance of FEDETR for two images from the test datasets SSDD and SAR Ship, including Gaofen-3 (a1–a8) and Sentinel-1 (b1–b8) images with different polarizations and resolutions. Ground truths, detection results, false detections, and missed detections are indicated with green, red, yellow, and blue boxes, respectively.
Figure 5
Experimental results for ship detection in SAR images across four distinct regions: Onshore1, Onshore2, Offshore1, and Offshore2. (a) Ground truth images; (b–e) detection results for DETR using VV and VH (DETR_VV, DETR_VH) and for FEDETR using VV and VH (FEDETR_VV, FEDETR_VH) polarizations, respectively. Ground truths, detection results, false detections, and missed detections are marked with green, red, yellow, and blue boxes.
Figure 6
Experimental results for ship detection in SAR images across four regions: Onshore1, Onshore2, Offshore1, and Offshore2. (a) Ground truth images; (b,c) predicted results from FEDETR with the optimal pooling type and kernel size and from the WFF method, respectively. Ground truths, detection results, false detections, and missed detections are marked with green, red, yellow, and blue boxes, respectively.
Figure 7
Correlation matrix analyzing the relationship between kernel size, LSF, and PSNR for max pooling (a) and median pooling (b) on the SSDD and SAR Ship datasets, validating the effectiveness of the FEDETR module.
Figure 8
LSF of images with different pooling types and kernel sizes. Panels (a1–a4) depict LSF images after max pooling and panels (a5–a8) after median pooling with kernel sizes 3, 5, 7, and 9, respectively, for Gaofen-3 HH images from the SAR Ship dataset. Panels (b1–b4) illustrate LSF images after max pooling and panels (b5–b8) after median pooling for images from the SSDD dataset.
Figure 9
Backscattering intensity in VV and VH polarizations and ship presence across four regions. (a1,a2) Backscattering intensity in VV and VH polarizations for Onshore1; (a3,a4) backscattering intensity for ships in Onshore1; (b1,b2) backscattering intensity in VV and VH polarizations for Onshore2; (b3,b4) backscattering intensity for ships in Onshore2; (c1,c2) backscattering intensity in VV and VH polarizations for Offshore1; (c3,c4) backscattering intensity for ships in Offshore1; (d1,d2) backscattering intensity in VV and VH polarizations for Offshore2; (d3,d4) backscattering intensity for ships in Offshore2. In each subfigure, the x-axis represents pixel intensity and the y-axis represents frequency.
Figure 10
LSF and PSNR comparisons for onshore and offshore areas (Onshore1 (a,b), Onshore2 (c,d), Offshore1 (e,f), Offshore2 (g,h)) using VV and VH polarization with median and max pooling.
Figure 11
Visual comparison of max and median pooling with different kernel sizes on onshore and offshore SAR imagery for VV and VH polarizations: (a1,a2) Onshore1 VV (max kernel size 3; median kernel size 3); (a3,a4) Onshore1 VV (median kernel size 5); (b1,b2) Onshore2 VV (max kernel size 3); (b3,b4) Onshore2 VH (median kernel size 5); (c1,c2) Offshore1 VV (max kernel size 7; median kernel size 7); (c3,c4) Offshore1 VH (max kernel size 3; median kernel size 3); (d1,d2) Offshore2 VV (max kernel size 5; median kernel size 5); (d3,d4) Offshore2 VH (max kernel size 5; median kernel size 5).
Figure 12
Experimental results for ship detection in SAR images across four regions: (a) Onshore1, (b) Onshore2, (c) Offshore1, and (d) Offshore2, illustrating the effectiveness of the Pauli decomposition method in reducing noise and distinguishing ships from the background. Ships are marked in pink, while noise clutter is shown in green.
Figure 13
Signal-to-clutter ratio (SCR) comparisons for different polarizations across various scenarios. VV polarization is shown in blue, VH polarization in orange, and Fvh in green.
Figure 14
Otsu's thresholding on four regions for Pauli and Fvh images: (a1–a4) thresholding of the Pauli images for Onshore1, Onshore2, Offshore1, and Offshore2; (b1–b4) thresholding of the Fvh images for the same regions.
Figure 15
Visualization of FEDETR attention maps, Pauli decomposition, Fvh feature maps, and WFF results for Onshore1 (a1–a4), Onshore2 (b1–b4), Offshore1 (c1–c4), and Offshore2 (d1–d4).
32 pages, 15160 KiB  
Article
Analyzing Temporal Characteristics of Winter Catch Crops Using Sentinel-1 Time Series
by Shanmugapriya Selvaraj, Damian Bargiel, Abdelaziz Htitiou and Heike Gerighausen
Remote Sens. 2024, 16(19), 3737; https://doi.org/10.3390/rs16193737 - 8 Oct 2024
Viewed by 710
Abstract
Catch crops are intermediate crops sown between two main crop cycles. Their adoption into cropping systems has increased considerably in recent years owing to their numerous benefits, in particular their potential for carbon fixation and for preventing nitrogen leaching during winter. The growth period of catch crops in Germany is often marked by dense cloud cover, which limits land surface monitoring through optical remote sensing. In such conditions, synthetic aperture radar (SAR) emerges as a viable option. Despite the known advantages of SAR, the temporal behavior of radar parameters in relation to catch crops remains largely unexplored. Hence, in this study, we exploited the dense time series of Sentinel-1 data within the Copernicus Space Component to study the temporal characteristics of catch crops over a test site in the center of Germany. Radar parameters such as VV, VH, and VH/VV backscatter, dpRVI (dual-pol Radar Vegetation Index), and VV coherence were extracted, and temporal profiles were interpreted for catch crops and preceding main crops along with in situ, temperature, and precipitation data. Additionally, we examined the temporal profiles of winter main crops (winter oilseed rape and winter cereals) that are grown parallel to the catch crop growing cycle. Based on the analyzed temporal patterns, we defined 22 descriptive features from VV, VH, VH/VV, and dpRVI that are specific to catch crop identification. We then conducted a Kruskal–Wallis test on the extracted parameters, both crop-wise and group-wise, to assess the significance of statistical differences among catch crop groups. Our results reveal that catch crops show a temporal pattern distinct from that of main crops and that each extracted parameter possesses a different sensitivity to catch crops. The parameters VV and VH are sensitive to phenological stages and crop structure, whereas VH/VV and dpRVI were found to be highly sensitive to crop biomass. Coherence can be used to detect sowing and harvest events. The preceding-main-crop analysis reveals that winter wheat and winter barley are the two dominant main crops grown before catch crops. Moreover, winter main crops (winter oilseed rape, winter cereals) cultivated during the catch crop cycle can be distinguished by exploiting the observed differences in sowing windows. The extracted descriptive features provide information about sowing, harvest, vigor, biomass, and the early/late die-off behavior specific to catch crop types. In the Kruskal–Wallis test, the high H-statistics and low p-values observed for several predictors indicate significant variability at the 0.001 level. Furthermore, Dunn's post hoc test among catch crop group pairs highlights the substantial differences between the cold-sensitive and legume groups (p < 0.001). Full article
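Two of the quantities used here lend themselves to a short sketch: dpRVI, computed per pixel from the dual-pol 2 × 2 covariance matrix, and the Kruskal–Wallis test across catch crop groups. The dpRVI formulation below (dpRVI = 1 − mβ, with m the degree of polarization and β the dominant-eigenvalue fraction) is the commonly cited one and may differ in detail from the authors' processing chain; the grouped data are purely synthetic.

```python
import numpy as np
from scipy.stats import kruskal

def dprvi(c11, c12, c22):
    """dpRVI from the per-pixel dual-pol covariance matrix
    C2 = [[c11, c12], [conj(c12), c22]], as dpRVI = 1 - m * beta."""
    span = c11 + c22                                  # total power
    det = c11 * c22 - np.abs(c12) ** 2                # determinant of C2
    m = np.sqrt(np.clip(1.0 - 4.0 * det / span ** 2, 0.0, 1.0))
    lam1 = 0.5 * (span + np.sqrt(np.clip(span ** 2 - 4.0 * det, 0.0, None)))
    return 1.0 - m * (lam1 / span)                    # beta = lam1 / span

# Kruskal-Wallis test on a hypothetical dpRVI-derived predictor,
# one value per field, grouped by catch crop category:
rng = np.random.default_rng(1)
cold_tolerant = rng.normal(0.60, 0.05, 40)
cold_sensitive = rng.normal(0.45, 0.05, 40)
legumes = rng.normal(0.55, 0.05, 40)
H, p = kruskal(cold_tolerant, cold_sensitive, legumes)
```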
Show Figures

Figure 1
Location of the study area and sample points collected on five different dates during 2021 and 2022. The extents of Sentinel-1 relative orbit tiles 177 and 44 are shown in the location map.
Figure 2
Some of the catch crop fields encountered during the field survey: (a) mustard, (b) oilseed radish, (c) phacelia, (d) clover, (e) niger, and (f) green mixture.
Figure 3
Phenological stages of different crops in the study area and corresponding Sentinel-1 data acquisitions in the years 2021 (blue) and 2022 (red).
Figure 4
Example of a real mustard catch crop field where the entire profile of (a) VV backscatter, (b) VH backscatter, (c) VH/VV backscatter, and (d) dpRVI is divided into three phases based on detected peak and minimum values.
Figure 5
Temporal VV backscatter profile of main crops followed by different catch crop categories: (a) cold-tolerant, (b) cold-sensitive, (c) legumes for the years 2021 (left) and 2022 (right). The vertical dashed lines (black) indicate the harvest of the main crop.
Figure 6
Temporal VH backscatter profile of main crops followed by different catch crop categories: (a) cold-tolerant, (b) cold-sensitive, (c) legumes for the years 2021 (left) and 2022 (right). The vertical dashed lines (black) indicate the harvest of the main crop.
Figure 7
Temporal VH/VV backscatter profile of main crops followed by different catch crop categories: (a) cold-tolerant, (b) cold-sensitive, (c) legumes for the years 2021 (left) and 2022 (right). The vertical dashed lines (black) indicate the harvest of the main crop.
Figure 8
Temporal dpRVI profile of main crops followed by different catch crop categories: (a) cold-tolerant, (b) cold-sensitive, (c) legumes for the years 2021 (left) and 2022 (right). The vertical dashed lines (black) indicate the harvest of the main crop.
Figure 9
Temporal VV coherence profile of main crops followed by different catch crop categories: (a) cold-tolerant, (b) cold-sensitive, (c) legumes for the years 2021 (left) and 2022 (right). The vertical dashed lines (black) indicate the harvest of the main crop.
Figure 10
Comparison of mean VV backscatter profiles: (a) winter oilseed rape, (b) winter cereals, (c) fallow/catch crop. The dashed lines (black) indicate the sowing time.
Figure 11
Comparison of VH backscatter profiles: (a) winter oilseed rape, (b) winter cereals, (c) fallow/catch crop. The dashed lines (black) indicate the sowing time.
Figure 12
Comparison of VH/VV backscatter profiles: (a) winter oilseed rape, (b) winter cereals, (c) fallow/catch crop. The dashed lines (black) indicate the sowing time.
Figure 13
Comparison of dpRVI profiles: (a) winter oilseed rape, (b) winter cereals, (c) fallow/catch crop. The dotted lines (black) indicate the sowing time.
Figure 14
Box plot depicting the different predictors extracted from the dpRVI time series for different catch crop types.
Figure 15
Box plot depicting the different predictors extracted from the VV, VH, and VH/VV backscatter time series for different catch crop types.
Figure A1
Kruskal–Wallis H and p-value statistics for each predictor based on the individual crop-wise test. *, **, and *** indicate significance at the 0.05, 0.01, and 0.001 levels.
21 pages, 23010 KiB  
Article
Three-Dimensional Reconstruction of Partially Coherent Scatterers Using Iterative Sub-Network Generation Method
by Xiantao Wang, Zhen Dong, Youjun Wang, Xing Chen and Anxi Yu
Remote Sens. 2024, 16(19), 3707; https://doi.org/10.3390/rs16193707 - 5 Oct 2024
Viewed by 531
Abstract
Synthetic aperture radar tomography (TomoSAR) has gained significant attention for three-dimensional (3D) imaging in urban environments. A notable limitation of traditional TomoSAR approaches is their primary focus on persistent scatterers (PSs), disregarding targets with temporally decorrelated characteristics. Temporal variations in coherence, especially in urban areas with dense concentrations of buildings and artificial structures, can reduce the number of detectable PSs and degrade 3D reconstruction performance. The concept of partially coherent scatterers (PCSs) has proven effective at capturing the partial temporal coherence of targets across the entire time baseline. In this study, a novel approach based on iterative sub-network generation is introduced to leverage PCSs for enhanced 3D reconstruction in dynamic environments. We propose a coherence-constrained iterative variance analysis approach to determine the optimal temporal baseline range that accurately reflects the interferometric coherence of PCSs. Utilizing the selected PCSs, a 3D imaging technique is developed that incorporates the iterative generation of sub-networks into the SAR tomography process. By employing the PS reference network as a foundation, we accurately invert PCSs through the iterative generation of local star-shaped networks, ensuring comprehensive coverage of PCSs in the study areas. The effectiveness of this method for the height estimation of PCSs is validated using the TerraSAR-X dataset. Compared with traditional PS-based TomoSAR, the proposed approach demonstrates that PCS-based elevation results complement those from PSs, significantly improving 3D reconstruction in evolving urban settings. Full article
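Detecting an appearing- or disappearing-type PCS (see Figure 1 below) amounts to locating a step change in a pixel's temporal coherence series. The fragment below is one simple variance-based change-point detector, offered only as a stand-in for the paper's coherence-constrained iterative variance analysis; the single-step model is an assumption.

```python
import numpy as np

def coherence_step_index(coh_series):
    """Index at which a single step in the coherence series best splits it,
    chosen to minimize the pooled within-segment variance."""
    coh = np.asarray(coh_series, dtype=np.float64)
    n = coh.size
    best_k, best_cost = None, np.inf
    for k in range(2, n - 1):            # at least 2 samples per segment
        left, right = coh[:k], coh[k:]
        cost = left.var() * left.size + right.var() * right.size
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Synthetic 'appearing' PCS: coherence jumps from ~0.2 to ~0.8 at image 20.
rng = np.random.default_rng(2)
series = np.concatenate([np.full(20, 0.2), np.full(15, 0.8)])
series += rng.normal(0.0, 0.05, series.size)
print(coherence_step_index(series))      # expected: close to 20
```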
(This article belongs to the Special Issue Synthetic Aperture Radar Interferometry Symposium 2024)
Show Figures

Figure 1
Three types of PCSs based on the division of coherence intervals. (a) Appearing-type PCS (APCS). (b) Disappearing-type PCS (DPCS). (c) Visiting-type PCS (VPCS). The red dashed lines mark the image index at which the coherence of the PCS changes.
Figure 2
Flowchart of PCS detection based on iterative ANOVA and coherence-interval confirmation.
Figure 3
Flowchart of the proposed PCS height inversion via the iterative network generation method. The red dashed box encloses the PCS detection method of Section 3.1.
Figure 4
Spatial–temporal baseline distribution of the SAR images. The highlighted red circle represents the master image.
Figure 5
SAR and optical images of the study area. (a) SAR average amplitude map. (b) PS distribution map. (c) Google optical image acquired in December 2015. (d) Google optical image acquired in January 2017.
Figure 6
PCS distribution map.
Figure 7
Distribution of different types of PCSs. (a) APCS. (b) DPCS. The color bar represents the step-change positions.
Figure 8
The distribution of PCSs in Areas A–D of Figure 7, along with the corresponding SAR average amplitude image. (a) Area A. (b) Area B. (c) Area C. (d) Area D. The color bar is consistent with that in Figure 7.
Figure 9
Distribution of the PS RN and height map of PSs. (a) Distribution of the PS RN. (b) Height map of PSs. The selected reference point with zero height is located on the left side of the road, indicated by a yellow pentagram.
Figure 10
Distribution of the PCS sub-network. (a) APCS sub-network. (b) DPCS sub-network. The yellow edges represent newly generated connections to the PCSs; blue indicates the PS RN, while yellow represents the sub-network.
Figure 11
The iterative generation process of the APCS sub-network, where (a–h) represent iterations 1–8. Red points are PSs and APCSs, blue edges denote edges in the PS RN, and yellow edges indicate those in the newly generated sub-network.
Figure 12
Height point cloud map of different types of PCSs. (a) APCS. (b) DPCS.
Figure 13
(a) Height point cloud map of PSs. (b) Unified height point cloud map of PSs and APCSs. (c) Unified height point cloud map of PSs and DPCSs. The selected reference point with zero height is the same as in Figure 10b.
16 pages, 5920 KiB  
Article
Pixel-Level Decision Fusion for Land Cover Classification Using PolSAR Data and Local Pattern Differences
by Spiros Papadopoulos, Vassilis Anastassopoulos and Georgia Koukiou
Electronics 2024, 13(19), 3846; https://doi.org/10.3390/electronics13193846 - 28 Sep 2024
Viewed by 458
Abstract
Combining various viewpoints to produce coherent and cohesive results requires decision fusion. Such methodologies are essential for synthesizing data from multiple sensors in remote sensing classification in order to reach conclusive decisions. Using fully polarimetric Synthetic Aperture Radar (PolSAR) imagery, our study combines the benefits of the Pauli and Krogager decompositions by extracting their scattering components. The Local Pattern Differences (LPD) method was employed on every decomposition component for pixel-level texture feature extraction. The extracted features were used to train three independent classifiers, whose outputs were treated as independent decisions for each land cover type and fused using a decision rule to produce complete and enhanced classification results. As part of our approach, the most appropriate classifiers and decision rules were selected after a thorough examination, together with the mathematical foundations required for effective decision fusion. Incorporating qualitative and quantitative information into the decision fusion process ensures robust and reliable classification results. The innovation of our approach lies in the dual use of decomposition methods and the application of a simple but effective decision fusion strategy. Full article
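The pixel-level fusion step, in which each classifier's output is treated as an independent decision and combined via a decision rule, can be illustrated with a per-pixel majority vote. This is a generic sketch rather than the exact rule adopted in the paper; the class labels and map sizes are hypothetical.

```python
import numpy as np

def majority_vote(*label_maps):
    """Fuse per-pixel class labels from several classifiers by majority
    vote; ties resolve to the lowest class index."""
    stacked = np.stack(label_maps)                     # (n_clf, rows, cols)
    n_classes = int(stacked.max()) + 1
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)                        # (rows, cols)

# Three hypothetical classifier outputs over a 4-class scene
# (0 = sea, 1 = urban, 2 = crops, 3 = forest):
rng = np.random.default_rng(3)
maps = [rng.integers(0, 4, size=(100, 100)) for _ in range(3)]
fused = majority_vote(*maps)
```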
(This article belongs to the Special Issue Artificial Intelligence in Image Processing and Computer Vision)
Show Figures

Figure 1
Study area: the broader area of Vancouver. Map data ©2024: Google, Landsat/Copernicus.
Figure 2
Correction of geometric distortions in the ALOS ascending image: (a) amplitude of the original image, (b) amplitude of the calibrated image, (c) Pauli components, (d) Krogager components, (e) georeferenced Pauli components, and (f) georeferenced Krogager components.
Figure 3
RGB representation of the study area: (a) Krogager's scattering components and (b) Pauli's scattering components.
Figure 4
Illustration of the quantization process for a 5 × 5 pixel window. Each neighboring pixel's intensity (g_i) is compared with the central pixel's intensity (g_c) to detect the local patterns. This procedure is then repeated for all pixels of the study area.
Figure 5
Windows used for classification in the study area: (a) Krogager and (b) Pauli.
Figure 6
Clusters of the datasets: (a) training dataset, (b) testing dataset. Blue spots: sea; red spots: urban; yellow spots: crops; green spots: forest.