Search Results (846)

Search Parameters:
Keywords = radar imagery

18 pages, 6889 KiB  
Article
Machine Learning-Based Detection of Icebergs in Sea Ice and Open Water Using SAR Imagery
by Zahra Jafari, Pradeep Bobby, Ebrahim Karami and Rocky Taylor
Remote Sens. 2025, 17(4), 702; https://doi.org/10.3390/rs17040702 - 19 Feb 2025
Viewed by 135
Abstract
Icebergs pose significant risks to shipping, offshore oil exploration, and underwater pipelines. Detecting and monitoring icebergs in the North Atlantic Ocean, where darkness and cloud cover are frequent, is particularly challenging. Synthetic aperture radar (SAR) serves as a powerful tool to overcome these difficulties. In this paper, we propose a method for automatically detecting and classifying icebergs in various sea conditions using C-band dual-polarimetric images from the RADARSAT Constellation Mission (RCM) collected throughout 2022 and 2023 across different seasons from the east coast of Canada. This method classifies SAR imagery into four distinct classes: open water (OW), which represents areas of water free of icebergs; open water with target (OWT), where icebergs are present within open water; sea ice (SI), consisting of ice-covered regions without any icebergs; and sea ice with target (SIT), where icebergs are embedded within sea ice. Our approach integrates statistical features capturing subtle patterns in RCM imagery with high-dimensional features extracted using a pre-trained Vision Transformer (ViT), further augmented by climate parameters. These features are classified using XGBoost to achieve precise differentiation between these classes. The proposed method achieves a low false positive rate of 1% for each class and a missed detection rate ranging from 0.02% for OWT to 0.04% for SI and SIT, along with an overall accuracy of 96.5% and an area under curve (AUC) value close to 1. Additionally, when the classes were merged for target detection (combining SI with OW and SIT with OWT), the model demonstrated an even higher accuracy of 98.9%. These results highlight the robustness and reliability of our method for large-scale iceberg detection along the east coast of Canada.
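The per-class false positive and missed detection rates quoted above follow from a one-vs-rest reading of the 4-class confusion matrix. A minimal sketch of that computation, using an illustrative matrix rather than the paper's actual results:

```python
import numpy as np

# Hypothetical 4-class confusion matrix (rows = true class, cols = predicted),
# classes ordered OW, OWT, SI, SIT. Values are illustrative, not the paper's.
classes = ["OW", "OWT", "SI", "SIT"]
cm = np.array([
    [980,   5,  10,   5],
    [  2, 990,   3,   5],
    [  8,   4, 985,   3],
    [  3,   6,   4, 987],
])

def per_class_rates(cm):
    """Return {class index: (false positive rate, missed detection rate)}."""
    total = cm.sum()
    rates = {}
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fn = cm[k, :].sum() - tp          # true class k, predicted as something else
        fp = cm[:, k].sum() - tp          # predicted k, true class is something else
        tn = total - tp - fn - fp
        fpr = fp / (fp + tn)
        mdr = fn / (fn + tp)              # missed detection = 1 - recall
        rates[k] = (fpr, mdr)
    return rates

rates = per_class_rates(cm)
for k, name in enumerate(classes):
    fpr, mdr = rates[k]
    print(f"{name}: FPR={fpr:.3%}, missed={mdr:.3%}")
```

The same matrix also yields overall accuracy (trace over total), which is how the 96.5% figure would be computed.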
Figure 1. Distribution of targets over date and location.
Figure 2. Four sample RGB images from the RCM dataset, where Red = HH, Green = HV, and Blue = (HH-HV)/2. (A,B) depict OW and SI, while (C,D) show icebergs in OW and SI. Only the red circles highlight icebergs; other bright pixels represent clutter or sea ice.
Figure 3. Block diagram illustrating the proposed system.
Figure 4. The impact of despeckling on iceberg images in the HH channel from the SAR dataset, using mean, bilateral, and Lee filters.
Figure 5. (A) Feature #780 exhibits the most overlap and is considered a weak feature. (B) In contrast, feature #114 is the strongest feature, displaying the least overlap.
Figure 6. ROC curves for the evaluated models: (A) ViTFM, (B) StatFM, (C) ViTStatFM, and (D) ViTStatClimFM. The curves illustrate the classification performance across OW, OWT, SI, and SIT categories.
Figure 7. Confusion matrices depicting the classification performance of the hybrid model with climate features: (A) classification performance across all four classes; (B) the model's ability to distinguish between target-containing patches and those without targets; (C) classification of sea ice (SI and SIT) versus open water (OW and OWT).
Figure 8. Application of the proposed method to a calibrated RCM image acquired on 23 June 2023. (A) The RCM image overlaid on the Labrador coast. (B) Corresponding ice chart from the Canadian Ice Service for the same region and date. (C) Probability map for OW. (D) Probability map for SI. (E) Probability map for OWT. (F) Probability map for SIT.
Figure 9. An extracted section from the full RCM image captured on 23 June 2023, showing icebergs embedded in SI. Red triangles indicate ground truth points, while green circles represent model predictions.
Figure 10. Missed targets located near patch borders, illustrating boundary effects. (A) A missed target near the top-left patch border. (B) A missed target within a central region affected by boundary artifacts. (C) A missed target near the bottom-right patch border, highlighting prediction inconsistencies at patch edges.
20 pages, 4530 KiB  
Article
Mapping Forest Aboveground Biomass Using Multi-Source Remote Sensing Data Based on the XGBoost Algorithm
by Dejun Wang, Yanqiu Xing, Anmin Fu, Jie Tang, Xiaoqing Chang, Hong Yang, Shuhang Yang and Yuanxin Li
Forests 2025, 16(2), 347; https://doi.org/10.3390/f16020347 - 15 Feb 2025
Viewed by 228
Abstract
Aboveground biomass (AGB) serves as an important indicator for assessing the productivity of forest ecosystems and exploring the global carbon cycle. However, accurate estimation of forest AGB remains a significant challenge, especially when integrating multi-source remote sensing data, and the effects of different feature combinations on AGB estimation results are unclear. In this study, we proposed a method for estimating forest AGB by combining Gao Fen 7 (GF-7) stereo imagery with data from Sentinel-1 (S1), Sentinel-2 (S2), the Advanced Land Observing Satellite digital elevation model (ALOS DEM), and field surveys. The continuous tree height (TH) feature was derived using GF-7 stereo imagery and the ALOS DEM. Spectral features were extracted from S1 and S2, and topographic features were extracted from the ALOS DEM. Using these features, 15 feature combinations were constructed. The recursive feature elimination (RFE) method was used to optimize each feature combination, which was then input into the extreme gradient boosting (XGBoost) model for AGB estimation. Different combinations of features used to estimate forest AGB were compared. The best model was selected for mapping AGB distribution at 30 m resolution. The outcomes showed that the forest AGB model was composed of 13 features, including TH, topographic, and spectral features extracted from S1 and S2 data. This model achieved the best prediction performance, with a determination coefficient (R2) of 0.71 and a root mean square error (RMSE) of 18.11 Mg/ha. TH was found to be the most important predictive feature, followed by S2 optical features, topographic features, and S1 radar features.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
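The R2 and RMSE figures quoted in the abstract are the standard regression scores. A minimal sketch with illustrative observed/predicted AGB values (not the study's data):

```python
import numpy as np

# Illustrative observed vs. predicted AGB values in Mg/ha -- not the paper's data.
observed  = np.array([120.0, 85.0, 60.0, 145.0, 95.0, 110.0])
predicted = np.array([112.0, 90.0, 70.0, 138.0, 101.0, 104.0])

def r2_score(y, yhat):
    """Coefficient of determination: 1 - residual SS / total SS."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y, yhat):
    """Root mean square error, in the units of y (here Mg/ha)."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

print(f"R^2  = {r2_score(observed, predicted):.3f}")
print(f"RMSE = {rmse(observed, predicted):.2f} Mg/ha")
```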
Figure 1. True-color image map of the study area. The red and blue lines show the spatial coverage of the forward and backward images of GF-7, respectively. The red, green, and blue triangular markers indicate the locations of the 2012, 2022, and 2024 sampling data, respectively.
Figure 2. Flowchart for the overall workflow of estimating forest AGB using combined multi-source remote sensing data.
Figure 3. Distribution of GCPs and TPs used for DSM generation. (a) Ground control points; (b) tie points.
Figure 4. Performance of XGBoost models for AGB estimation using different feature combinations. (a) TS1S2D; (b) TS2D; (c) TS1S2.
Figure 5. Relative importance ranking of features based on the XGBoost model built with the TS1S2D feature combination.
Figure 6. Spatial distribution of AGB in the research area. (a) Distribution of AGB predicted by the XGBoost model with the TS1S2D combination; (b) S2 true-color image of the region within the red box; (c) zoomed-in view of the red box in the AGB distribution map.
Figure 7. Forest AGB estimation based on the XGBoost model and TS1S2D feature combination. (a) Coniferous forests; (b) broadleaf forests.
Figure 8. Feature importance rankings for different forest types based on the XGBoost model and TS1S2D feature combination. (a) Coniferous forests; (b) broadleaf forests.
Figure 9. AGB difference maps. (a) TS2D–TS1S2D; (b) TS1S2–TS1S2D.
Figure 10. Comparison of AGB distributions in this study and published datasets (Zhang et al. [41], Yang et al. [44], Chang et al. [52]). The horizontal line in each box plot represents the median, the black dot indicates the mean, and the width of the violin plot reflects the data proportion.
24 pages, 11584 KiB  
Article
Method for Landslide Area Detection with RVI Data Which Indicates Base Soil Areas Changed from Vegetated Areas
by Kohei Arai, Yushin Nakaoka and Hiroshi Okumura
Remote Sens. 2025, 17(4), 628; https://doi.org/10.3390/rs17040628 - 12 Feb 2025
Viewed by 325
Abstract
This study investigates the use of the radar vegetation index (RVI) derived from Sentinel-1 synthetic aperture radar (SAR) data for landslide detection. Traditional landslide detection methods often rely on the Normalized Difference Vegetation Index (NDVI) derived from optical imagery, which is susceptible to limitations imposed by weather conditions (clouds, rain) and nighttime. In contrast, SAR data, acquired by Sentinel-1, provides all-weather, day-and-night coverage. To leverage this advantage, we propose a novel approach utilizing RVI, a vegetation index calculated from SAR data, to identify non-vegetated areas, which often indicate potential landslide zones. To enhance the accuracy of non-vegetated area classification, we employ the high-performing EfficientNetV2 deep learning model. We evaluated the classification performance of EfficientNetV2 using RVI derived from Sentinel-1 SAR data with VV and VH polarizations. Experiments were conducted on SAR imagery of the Iburi district in Hokkaido, Japan, severely impacted by an earthquake in 2018. Our findings demonstrate that the classification performance using RVI with both VV and VH polarizations significantly surpasses that of using VV and VH polarizations alone. These results highlight the effectiveness of RVI for identifying non-vegetated areas, particularly in landslide detection scenarios. The proposed RVI-based method has broader applications beyond landslide detection, including other disaster area assessments, agricultural field monitoring, and forest inventory.
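The paper's Equation (1) for the dual-pol RVI is not reproduced in this listing; the sketch below assumes the commonly used dual-pol form RVI = 4*VH / (VV + VH) on linear-power backscatter, so treat it as an illustration rather than the authors' exact definition:

```python
import numpy as np

def rvi_dual_pol(sigma_vv, sigma_vh):
    """Dual-pol radar vegetation index, assuming the common form
    RVI = 4*VH / (VV + VH) with backscatter in linear power units.
    (The paper's Equation (1) is not reproduced here; this is a sketch.)
    """
    sigma_vv = np.asarray(sigma_vv, dtype=float)
    sigma_vh = np.asarray(sigma_vh, dtype=float)
    return 4.0 * sigma_vh / (sigma_vv + sigma_vh)

# Vegetation scatters more cross-pol energy (higher VH), so RVI rises with
# vegetation and drops over bare soil such as a fresh landslide scar.
vegetated = rvi_dual_pol(0.10, 0.05)   # relatively strong VH
bare_soil = rvi_dual_pol(0.10, 0.005)  # weak VH over exposed soil
print(float(vegetated), float(bare_soil))
```

This drop in RVI over exposed soil is exactly the signal the study uses to flag candidate landslide areas.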
Figure 1. Photo of the landslides due to the Iburi earthquake, which occurred in central Hokkaido, Japan at 3:07 a.m. on 6 September 2018. (a) Photo of landslide disasters (https://www.jma-net.go.jp/sapporo/jishin/iburi_tobu.html, accessed on 15 January 2025); (b) Google map of the intensive study area.
Figure 2. Process flow of the proposed method.
Figure 3. VV and VH polarization data of Sentinel-1 SAR imagery of Atsuma, Hokkaido acquired on 13 September 2018 (after the earthquake) and 1 September 2018 (before the earthquake). (a) VH polarization, 13 September 2018; (b) VV polarization, 13 September 2018; (c) VH polarization, 1 September 2018; (d) VV polarization, 1 September 2018.
Figure 4. The RVI calculated from the VV and VH backscatter signals based on Equation (1), for Iburi, Atsuma in Hokkaido, acquired on (a) 13 September 2018 and (b) 1 September 2018.
Figure 5. Small portions of the training (a,b), validation (c,d), and test (e,f) samples of the RVI imagery data, for landslide areas (a,c,e) and non-landslide areas (b,d,f).
Figure 6. Small portions of the training (a,b), validation (c,d), and test (e,f) samples of the VH polarization imagery data, for landslide areas (a,c,e) and non-landslide areas (b,d,f).
Figure 7. Small portions of the training (a,b), validation (c,d), and test (e,f) samples of the VV polarization imagery data, for landslide areas (a,c,e) and non-landslide areas (b,d,f).
Figure 8. Sentinel-2/MSI NDVI data acquired on 5 September 2018 (just before the earthquake) and 15 September 2018 (after the earthquake). (a) 5 September 2018; (b) 15 September 2018; (c) color scale.
Figure 9. Classification performance of EfficientNetV2 for landslide versus non-landslide areas using RVI, VH polarization, and VV polarization data. (a–c) Case (1): RVI; (d–f) Case (2): VH polarization only; (g–i) Case (3): VV polarization only.
24 pages, 13033 KiB  
Article
Detection of Parabolic Antennas in Satellite Inverse Synthetic Aperture Radar Images Using Component Prior and Improved-YOLOv8 Network in Terahertz Regime
by Liuxiao Yang, Hongqiang Wang, Yang Zeng, Wei Liu, Ruijun Wang and Bin Deng
Remote Sens. 2025, 17(4), 604; https://doi.org/10.3390/rs17040604 - 10 Feb 2025
Viewed by 335
Abstract
Inverse Synthetic Aperture Radar (ISAR) images of space targets and their key components are very important. However, ISAR imaging suffers from numerous drawbacks, including a low Signal-to-Noise Ratio (SNR), blurred edges, significant variations in scattering intensity, and limited data availability, all of which constrain recognition capabilities. The terahertz (THz) regime has shown excellent capacity for space detection, revealing fine details of target structures. However, in ISAR images, as the observation aperture moves, the imaging features of extended structures (ESs) undergo significant changes, posing challenges to subsequent recognition performance. In this paper, a parabolic antenna is taken as the research object, and an innovative approach for identifying this component is proposed that exploits the Component Prior and Imaging Characteristics (CPICs). To tackle the challenges associated with component identification in satellite ISAR imagery, this study employs the Improved-YOLOv8 model, which combines the YOLOv8 algorithm, an adaptive detection head with an attention mechanism known as the Dynamic head (Dyhead), and a regression box loss function, Wise Intersection over Union (WIoU), that addresses varying sample difficulty. After being trained on the simulated dataset, the model demonstrated a considerable enhancement in detection accuracy over the five base models, reaching an mAP50 of 0.935 and an mAP50-95 of 0.520; compared with YOLOv8n, it improved by 0.192 and 0.076 in mAP50 and mAP50-95, respectively. Finally, the effectiveness of the proposed method is confirmed through comprehensive simulations and anechoic chamber tests.
(This article belongs to the Special Issue Advanced Spaceborne SAR Processing Techniques for Target Detection)
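The WIoU loss mentioned above builds on the standard box Intersection over Union; since the paper's exact WIoU formulation is not reproduced in this listing, the sketch below shows only the underlying IoU computation on axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x1 patch: IoU = 1 / (4 + 4 - 1).
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

WIoU reweights this overlap term with a difficulty-dependent focusing factor, which is what lets hard and easy samples contribute differently to the regression loss.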
Figure 1. The overall framework diagram of the proposed method.
Figure 2. The observational geometry for space-based terahertz radar in detecting space targets.
Figure 3. Geometry projection diagram of ISAR imaging.
Figure 4. Parabolic antenna imaging characteristics. (a) Three typical observation apertures. (b) Scattering intensity versus azimuth angle. (c) The specular point. (d) The edge pair-points. (e) The ellipse arc.
Figure 5. Satellite CAD model with 5 main scattering components (left) and its geometry and size (right).
Figure 6. Imaging results and corresponding CAD under three typical observation apertures.
Figure 7. Structure diagram of Improved-YOLOv8.
Figure 8. Structure diagram of Dyhead.
Figure 9. The training samples under different apertures.
Figure 10. The distribution of bounding boxes within the dataset.
Figure 11. The mAP50 (left) and mAP50-95 (right) of different networks on the training set.
Figure 12. A comparison of the detection performance of different algorithms on EM data.
Figure 13. mAP50 and mAP50-95 of different networks.
Figure 14. PR curves for three different objects.
Figure 15. Anechoic chamber experiment and satellite mock-up presentation. (a) Terahertz radar technology system. (b) Satellite model for the anechoic chamber experiment.
Figure 16. Comparison of performance between different networks on anechoic chamber data.
16 pages, 7741 KiB  
Article
Millimeter-Wave SAR Imaging for Sub-Millimeter Defect Detection with Non-Destructive Testing
by Bengisu Yalcinkaya, Elif Aydin and Ali Kara
Electronics 2025, 14(4), 689; https://doi.org/10.3390/electronics14040689 - 10 Feb 2025
Viewed by 355
Abstract
This paper introduces a high-resolution 77–81 GHz mmWave Synthetic Aperture Radar (SAR) imaging methodology integrating low-cost hardware with modified radar signal characteristics specifically for NDT applications. The system is optimized to detect minimal defects in materials, including low-reflectivity ones. In contrast to the existing studies, by optimizing key system parameters, including frequency slope, sampling interval, and scanning aperture, high-resolution SAR images are achieved with reduced computational complexity and storage requirements. The experiments demonstrate the effectiveness of the system in detecting optically undetectable minimal surface defects down to 0.4 mm, such as bonded adhesive lines on low-reflectivity materials with 2500 measurement points and sub-millimeter features on metallic targets at a distance of 30 cm. The results show that the proposed system achieves comparable or superior image quality to existing high-cost setups while requiring fewer data points and simpler signal processing. Low-cost, low-complexity, and easy-to-build mmWave SAR imaging is constructed for high-resolution SAR imagery of targets with a focus on detecting defects in low-reflectivity materials. This approach has significant potential for practical NDT applications with a unique emphasis on scalability, cost-effectiveness, and enhanced performance on low-reflectivity materials for industries such as manufacturing, civil engineering, and 3D printing.
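A quick sanity check on the quoted 77–81 GHz band: the 4 GHz sweep bandwidth sets the standard FMCW/SAR range resolution ΔR = c/(2B), about 3.7 cm, so the sub-millimeter defect sensitivity reported above presumably comes from the cross-range SAR processing and image contrast rather than range binning alone. A minimal sketch:

```python
# FMCW/SAR range resolution from chirp bandwidth: delta_R = c / (2 * B).
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Theoretical range resolution in metres for a given sweep bandwidth."""
    return C / (2.0 * bandwidth_hz)

bw = 81e9 - 77e9  # 4 GHz sweep, as in the 77-81 GHz system above
print(f"range resolution ~ {range_resolution(bw) * 1000:.1f} mm")
```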
Figure 1. High-level workflow.
Figure 2. SAR imaging geometry.
Figure 3. SAR system and measurement environment.
Figure 4. Structural architecture of the scanner.
Figure 5. High-level system architecture and data flow.
Figure 6. (a) Optical image; (b) proposed parameters; (c) parameters in [38]; (d) parameters in [10].
Figure 7. SAR imaging results for different scanning parameters.
Figure 8. Colored and grayscale high-resolution SAR imaging results for the metal square plate with holes.
Figure 9. Demonstration of the defective line on the PLA plate.
Figure 10. SAR imaging results of the PLA plate for different scanning scenarios. (a) D_{x,y} = 100 mm, d_{x,y} = 2 mm, 2500 measurement points. (b) D_{x,y} = 200 mm, d_{x,y} = 2 mm, 10,000 measurement points.
22 pages, 15578 KiB  
Article
Analysis of Ground Subsidence Evolution Characteristics and Attribution Along the Beijing–Xiong’an Intercity Railway with Time-Series InSAR and Explainable Machine-Learning Technique
by Xin Liu, Huili Gong, Chaofan Zhou, Beibei Chen, Yanmin Su, Jiajun Zhu and Wei Lu
Land 2025, 14(2), 364; https://doi.org/10.3390/land14020364 - 10 Feb 2025
Viewed by 287
Abstract
The long-term overextraction of groundwater in the Beijing–Tianjin–Hebei region has led to the formation of the world’s largest groundwater depression cone and the most extensive land subsidence zone, posing a potential threat to the operational safety of high-speed railways in the region. As a critical transportation hub connecting Beijing and the Xiong’an New Area, the Beijing–Xiong’an Intercity Railway traverses geologically complex areas with significant ground subsidence issues. Monitoring and analyzing the causes of land subsidence along the railway are essential for ensuring its safe operation. Using Sentinel-1A radar imagery, this study applies PS-InSAR technology to extract the spatiotemporal evolution characteristics of ground subsidence along the railway from 2016 to 2022. By employing a buffer zone analysis and profile analysis, the subsidence patterns at different stages (pre-construction, construction, and operation) are revealed, identifying the major subsidence cones along the Yongding River, Yongqing, Daying, and Shengfang regions, and their impacts on the railway. Furthermore, the XGBoost model and SHAP method are used to quantify the primary influencing factors of land subsidence. The results show that changes in confined water levels are the most significant factor, contributing 34.5%, with strong interactions observed between the compressible layer thickness and confined water levels. The subsidence gradient analysis indicates that the overall subsidence gradient along the Beijing–Xiong’an Intercity Railway currently meets safety standards. This study provides scientific evidence for risk prevention and the control of land subsidence along the railway and holds significant implications for ensuring the safety of high-speed rail operations.
(This article belongs to the Special Issue Assessing Land Subsidence Using Remote Sensing Data)
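A contribution figure like the 34.5% quoted for confined water levels is typically obtained by normalizing mean absolute SHAP values across factors. The sketch below illustrates that normalization with made-up numbers; factor names other than the two mentioned in the abstract are hypothetical:

```python
# Normalize mean |SHAP| values into percentage contributions per factor.
# Values are illustrative placeholders, not the paper's results; only the first
# two factor names appear in the abstract, the rest are hypothetical.
mean_abs_shap = {
    "confined_water_level": 1.38,
    "compressible_layer_thickness": 0.95,
    "unconfined_water_level": 0.80,
    "distance_to_fault": 0.52,
    "land_use": 0.35,
}

total = sum(mean_abs_shap.values())
contribution = {k: 100.0 * v / total for k, v in mean_abs_shap.items()}

# Rank factors by their share of the total attribution.
for factor, pct in sorted(contribution.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {pct:.1f}%")
```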
Figure 1. Study area and the extent of SAR imagery.
Figure 2. PS-InSAR technology roadmap.
Figure 3. XGBoost technology roadmap.
Figure 4. Time-series evolution trend of subsidence in the background area along the Beijing–Xiong’an Intercity Railway.
Figure 5. Accuracy verification of InSAR subsidence monitoring results against leveling measurements.
Figure 6. Annual average subsidence rates in the buffer zone along the Beijing–Xiong’an Intercity Railway across different periods.
Figure 7. Relationship between annual average subsidence rates along the profile of the Beijing–Xiong’an Intercity Railway and the distribution of subsidence funnels.
Figure 8. Annual average subsidence rates of the Beijing–Xiong’an Intercity Railway cross-section during different periods (dashed lines of different colors mark areas where the subsidence rate varies greatly).
Figure 9. Cumulative subsidence at stations along the Beijing–Xiong’an Intercity Railway.
Figure 10. Variations in annual average subsidence rates of stations along the Beijing–Xiong’an Intercity Railway across different periods.
Figure 11. Distribution of monitoring points on both sides of stations along the Beijing–Xiong’an Intercity Railway.
Figure 12. Subsidence rate differences between the eastern and western sides of Xiong’an Station and Bazhou North Station.
Figure 13. Subsidence rate differences between the eastern and western sides of Gu’an East Station, Daxing Airport Station, and Beijing Daxing Station.
Figure 14. Accuracy validation based on the Extreme Gradient Boosting (XGBoost) model.
Figure 15. Distribution of the importance of subsidence-influencing factors for the Beijing–Xiong’an Intercity Railway.
Figure 16. SHAP interpretability analysis of overall subsidence characteristics.
Figure 17. SHAP interpretability analysis of factor interactions.
Figure 18. Variations in subsidence gradient along the Beijing–Xiong’an Intercity Railway.
21 pages, 25589 KiB  
Article
Robust and Efficient SAR Ship Detection: An Integrated Despecking and Detection Framework
by Yulin Chen, Yanyun Shen, Chi Duan, Zhipan Wang, Zewen Mo, Yingyu Liang and Qingling Zhang
Remote Sens. 2025, 17(4), 580; https://doi.org/10.3390/rs17040580 - 8 Feb 2025
Viewed by 284
Abstract
Deep-learning-based ship detection methods in Synthetic Aperture Radar (SAR) imagery are a current research hotspot. However, these methods rely on high-quality images as input, and in practical applications, SAR images are interfered with by speckle noise, leading to a decrease in image quality [...] Read more.
Deep-learning-based ship detection methods in Synthetic Aperture Radar (SAR) imagery are a current research hotspot. However, these methods rely on high-quality images as input, and in practical applications, SAR images are interfered with by speckle noise, which degrades image quality and thus detection accuracy. To address this problem, we propose a unified framework for ship detection that incorporates a despeckling module into the object detection network. This integration is designed to enhance detection performance even with low-quality SAR images affected by speckle noise. We further propose a Multi-Scale Window Swin Transformer module, which improves image quality by effectively capturing both the global and local features of SAR images. Additionally, recognizing the challenges associated with the scarcity of labeled data in practical scenarios, we employ an unlabeled distillation learning method to train our despeckling module; this technique avoids the need for extensive manual labeling and makes efficient use of unlabeled data. We have tested the robustness of our method using public SAR datasets, including SSDD and HRSID, as well as a newly constructed dataset, the RSSDD. The results demonstrate that our method not only achieves state-of-the-art performance but also excels in conditions with low signal-to-noise ratios. Full article
(This article belongs to the Section AI Remote Sensing)
Figure 1: Overall framework of our method.
Figure 2: Framework of the teacher–student model, where * denotes the multiplication sign.
Figure 3: Structure of the Multi-Scale Window Swin Transformer (MSW Swin Transformer) module.
Figure 4: Mobile window reorganization.
Figure 5: Structure of the LFE module, where * denotes the multiplication sign.
Figure 6: Ship detection model with noise robustness based on YOLOv5.
Figure 7: Samples from RSSDD.
Figure 8: Detection results of our method based on YOLOv5 with different noise parameters L = [1, 2, 3, 4]: (a) ground truth, (b) L = 4, (c) L = 3, (d) L = 2, (e) L = 1. Green boxes mark ground truth; red boxes mark predictions.
Figure 9: Detection results with CenterNet: (a) ground truth; detection results of (b) the baseline model, (c) SARCNN, (d) SARDRN, (e) the Gamma filter, (f) the Lee filter, (g) the Frost filter, and (h) the proposed method. Green boxes mark ground truth; red boxes mark predictions.
Figure 10: Visualization of the object detection algorithm on SSDD and RSSDD: (a) ground truth, (b) baseline, (c) Gamma, (d) ours. Green boxes mark ground truth; red boxes mark predictions.
13 pages, 2045 KiB  
Article
A Hardware Accelerator for Real-Time Processing Platforms Used in Synthetic Aperture Radar Target Detection Tasks
by Yue Zhang, Yunshan Tang, Yue Cao and Zhongjun Yu
Micromachines 2025, 16(2), 193; https://doi.org/10.3390/mi16020193 - 7 Feb 2025
Viewed by 428
Abstract
The deep learning object detection algorithm has been widely applied in the field of synthetic aperture radar (SAR). By utilizing deep convolutional neural networks (CNNs) and other techniques, these algorithms can effectively identify and locate targets in SAR images, thereby improving the accuracy [...] Read more.
The deep learning object detection algorithm has been widely applied in the field of synthetic aperture radar (SAR). By utilizing deep convolutional neural networks (CNNs) and other techniques, these algorithms can effectively identify and locate targets in SAR images, thereby improving the accuracy and efficiency of detection. In recent years, achieving real-time monitoring of regions has become a pressing need, leading to real-time SAR image target detection being performed directly on airborne or satellite-borne processing platforms. However, current GPU-based real-time processing platforms struggle to meet the power consumption requirements of airborne or satellite applications. To address this issue, a low-power, low-latency accelerator for deep learning SAR object detection algorithms was designed in this study to enable real-time target detection on airborne and satellite SAR platforms. The accelerator features a Process Engine (PE) suited to multidimensional parallel convolution, making full use of Field-Programmable Gate Array (FPGA) computing resources to reduce convolution computing time. Furthermore, a memory arrangement designed around this PE enhances memory read/write efficiency, and dataflow patterns suited to FPGA computing are applied to reduce computation latency. Our experimental results demonstrate that deploying the Yolov5s-based SAR object detection algorithm on this accelerator, mounted on a Virtex 7 690t chip, consumes only 7 W of dynamic power while detecting 52.19 SAR images of 512 × 512 pixels per second. Full article
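The Loop1–Loop6 breakdown referenced in Figure 3 of this article corresponds to the six nested loops of a standard 2-D convolution, which is the loop nest a PE array tiles and parallelizes in hardware. The sketch below is a naive software reference with illustrative shapes and loop ordering, not the accelerator's actual dataflow:

```python
import numpy as np

def conv2d_loops(fmap, weights):
    """Naive six-loop decomposition of a 2-D convolution (stride 1,
    no padding). fmap: (Cin, H, W); weights: (Cout, Cin, K, K).
    A hardware accelerator unrolls/tiles some of these loops across PEs."""
    cout, cin, k, _ = weights.shape
    _, h, w = fmap.shape
    out = np.zeros((cout, h - k + 1, w - k + 1))
    for oc in range(cout):                      # output channels
        for ic in range(cin):                   # input channels
            for oy in range(h - k + 1):         # output rows
                for ox in range(w - k + 1):     # output cols
                    for ky in range(k):         # kernel rows
                        for kx in range(k):     # kernel cols
                            out[oc, oy, ox] += (
                                fmap[ic, oy + ky, ox + kx]
                                * weights[oc, ic, ky, kx]
                            )
    return out

fmap = np.ones((1, 4, 4))
weights = np.ones((1, 1, 3, 3))
y = conv2d_loops(fmap, weights)  # each output element sums nine ones
```

Which loops are unrolled in parallel and which are kept sequential determines the PE utilization and the buffer access pattern, which is the design space the article's memory arrangement addresses.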
(This article belongs to the Section E:Engineering and Technology)
Figure 1: The structure of the entire accelerator.
Figure 2: The specific breakdown of the convolution operation.
Figure 3: The data input order of the PE: (a)–(f) the data input corresponding to Loop6, Loop5, Loop4, Loop3, Loop2, and Loop1, respectively.
Figure 4: The hardware structure of the PE.
Figure 5: The structure of the buffers: (a) FmBuffer; (b) WtBuffer.
Figure 6: The memory arrangements: (a) FeatureMap; (b) Weight.
Figure 7: The streaming computing mode.
Figure 8: Dynamic power.
26 pages, 13415 KiB  
Article
A Methodology for the Multitemporal Analysis of Land Cover Changes and Urban Expansion Using Synthetic Aperture Radar (SAR) Imagery: A Case Study of the Aburrá Valley in Colombia
by Ahmed Alejandro Cardona-Mesa, Rubén Darío Vásquez-Salazar, Juan Camilo Parra, César Olmos-Severiche, Carlos M. Travieso-González and Luis Gómez
Remote Sens. 2025, 17(3), 554; https://doi.org/10.3390/rs17030554 - 6 Feb 2025
Viewed by 554
Abstract
The Aburrá Valley, located in the northwestern region of Colombia, has undergone significant land cover changes and urban expansion in recent decades, driven by rapid population growth and infrastructure development. This region, known for its steep topography and dense urbanization, faces considerable environmental [...] Read more.
The Aburrá Valley, located in the northwestern region of Colombia, has undergone significant land cover changes and urban expansion in recent decades, driven by rapid population growth and infrastructure development. This region, known for its steep topography and dense urbanization, faces considerable environmental challenges. Monitoring these transformations is essential for informed territorial planning and sustainable development. This study leverages Synthetic Aperture Radar (SAR) imagery from the Sentinel-1 mission, covering 2017–2024, to propose a methodology for the multitemporal analysis of land cover dynamics and urban expansion in the valley. The proposed methodology comprises several steps: first, monthly SAR images were acquired for every year under study from 2017 to 2024, ensuring the capture of surface changes. These images were properly calibrated, rescaled, and co-registered. Then, various multitemporal fusions based on statistical operations were proposed to detect different phenomena related to land cover change and urban expansion. The methodology also involved statistical fusion techniques—median, mean, and standard deviation—to capture urbanization dynamics. The kurtosis calculations highlighted areas where infrequent but significant changes occurred, such as large-scale construction projects or sudden shifts in land use, providing a statistical measure of surface variability throughout the study period. An advanced clustering technique segmented the images into distinctive classes, utilizing fuzzy logic and a kernel-based method, enhancing the analysis of changes. Additionally, Pearson correlation coefficients were calculated to explore the relationships between the identified land cover change classes and their spatial distribution across nine distinct geographic zones in the Aburrá Valley.
The results highlight a marked increase in urbanization, particularly along the valley’s periphery, where previously vegetated areas have been replaced by built environments. Additionally, the visual inspection analysis revealed areas of high variability near river courses and industrial zones, indicating ongoing infrastructure and construction projects. These findings emphasize the rapid and often unplanned nature of urban growth in the region, posing challenges to both natural resource management and environmental conservation efforts. The study underscores the need for the continuous monitoring of land cover changes using advanced remote sensing techniques like SAR, which can overcome the limitations posed by cloud cover and rugged terrain. The conclusions drawn suggest that SAR-based multitemporal analysis is a robust tool for detecting and understanding urbanization’s spatial and temporal dynamics in regions like the Aburrá Valley, providing vital data for policymakers and planners to promote sustainable urban development and mitigate environmental degradation. Full article
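The statistical fusions named in the abstract (median, mean, standard deviation, coefficient of variation, kurtosis) are per-pixel reductions along the time axis of the co-registered SAR stack. A minimal NumPy sketch of those reductions, applied to a synthetic 12-month stack with one illustrative one-off change:

```python
import numpy as np

def temporal_fusions(stack):
    """Per-pixel statistics over a (time, rows, cols) SAR backscatter stack."""
    mean = stack.mean(axis=0)
    median = np.median(stack, axis=0)
    std = stack.std(axis=0)
    cv = np.divide(std, mean, out=np.zeros_like(mean), where=mean != 0)
    # Excess kurtosis flags pixels with rare, large deviations (e.g., a
    # sudden construction event) rather than steady gradual change.
    centered = stack - mean
    m2 = (centered ** 2).mean(axis=0)
    m4 = (centered ** 4).mean(axis=0)
    kurt = np.divide(m4, m2 ** 2, out=np.zeros_like(m4), where=m2 != 0) - 3.0
    return {"mean": mean, "median": median, "std": std, "cv": cv, "kurtosis": kurt}

# Twelve monthly images, 2 x 2 pixels; pixel (0, 0) has a single abrupt jump.
stack = np.ones((12, 2, 2))
stack[6, 0, 0] = 10.0  # one-off change -> high kurtosis at (0, 0)
f = temporal_fusions(stack)
```

The median is robust to such one-off events (it stays near the background value), while the kurtosis singles them out, which matches the abstract's use of kurtosis to detect infrequent but significant changes.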
Figure 1: The Aburrá Valley (white line) between the valleys of the Magdalena and Cauca rivers. Data were acquired from ALOS PALSAR Terrain Corrected products and from IGAC.
Figure 2: Region of interest (yellow bounding box) selected from the interior of the Aburrá Valley (red line) and the municipalities that are part of it (green lines). Data were acquired from IGAC.
Figure 3: Proposed methodology for the multitemporal analysis SMA1.
Figure 4: Proposed methodology for the kurtosis multitemporal analysis SMA2.
Figure 5: Proposed methodology for the analysis of zonal land cover changes.
Figure 6: Samples of resulting images from the multitemporal analysis methodology proposed in SMA1: (a) MF_Mdn, (b) MF_σ, (c) MF_M, (d) MF_CV, (e) MF_C for the year 2018, and (f) MF_K of SMA2 for 2017–2024. Scale, coordinate frame (grid), and north correspond to the region described in the Study Area section.
Figure 7: Areas of analysis of the results from the SMA1 methodological route and kurtosis: (A) Central Park in Bello; (B) Parques del Río Medellín; (C) Arkadia shopping center; (D) Peldar plant; (E) La García water supply reservoir; (F) Conasfaltos dam; (G) La Ayurá stream basin in Envigado; (H) Central Park in Bello; (I) Avenida Regional Norte; (J) Vía Distribuidora Sur.
Figure 8: Side-by-side comparison of the Aburrá Valley: (a) division into nine geographical zones; (b) the corresponding correlation coefficients for five land cover change types across the nine zones.
Figure 9: Color maps for every change class in the Aburrá Valley’s nine geographical zones.
22 pages, 2496 KiB  
Article
Positioning Technology Without Ground Control Points for Spaceborne Synthetic Aperture Radar Images Using Rational Polynomial Coefficient Model Considering Atmospheric Delay
by Doudou Hu, Chunquan Cheng, Shucheng Yang and Chengxi Hu
Appl. Sci. 2025, 15(3), 1615; https://doi.org/10.3390/app15031615 - 5 Feb 2025
Viewed by 407
Abstract
This study addresses the issue of atmospheric delay correction for the rational polynomial coefficient (RPC) model associated with spaceborne synthetic aperture radar (SAR) imagery under conditions lacking ephemeris data, proposing a novel approach to enhance the geometric positioning accuracy of RPC models. A [...] Read more.
This study addresses the issue of atmospheric delay correction for the rational polynomial coefficient (RPC) model associated with spaceborne synthetic aperture radar (SAR) imagery under conditions lacking ephemeris data, proposing a novel approach to enhance the geometric positioning accuracy of RPC models. A satellite position inversion method based on the vector-autonomous intersection technique was developed, incorporating ionospheric delay and neutral atmospheric delay models to derive atmospheric delay errors. Additionally, an RPC model reconstruction approach, which integrates atmospheric correction, is proposed. Validation experiments using GF-3 satellite imagery demonstrated that the atmospheric delay values obtained by this method differed by only 0.0001 m from those derived using the traditional ephemeris-based approach, a negligible difference. The method also exhibited high robustness in long-strip imagery. The reconstructed RPC parameters improved image-space accuracy by 18–44% and object-space accuracy by 19–32%. The results indicate that this approach can fully replace traditional ephemeris-based methods for atmospheric delay extraction under ephemeris-free conditions, significantly enhancing the geometric positioning accuracy of SAR imagery RPC models, with substantial application value and development potential. Full article
Figure 1: Radar LOS vector inversion.
Figure 2: Satellite position inversion.
Figure 3: Ionospheric single-layer model.
Figure 4: GF-3 data: (a) area 1; (b) area 2. Red boxes mark ascending-orbit acquisitions; blue boxes mark descending-orbit acquisitions.
Figure 5: Satellite position inversion accuracy: (a) satellite position error; (b) slant-range error.
Figure 6: Electron density and specific humidity in the zenith direction: (a) electron density; (b) specific humidity.
Figure 7: Stratified atmospheric delays: (a) gf1 ionospheric delay; (b) gf7 ionospheric delay; (c) gf1 neutral atmospheric delay; (d) gf7 neutral atmospheric delay.
Figure 8: Atmospheric delay.
Figure 9: SAR image positioning accuracy: (a) image-space accuracy; (b) object-space accuracy.
19 pages, 5660 KiB  
Article
Monitoring of Cropland Non-Agriculturalization Based on Google Earth Engine and Multi-Source Data
by Liuming Yang, Qian Sun, Rong Gui and Jun Hu
Appl. Sci. 2025, 15(3), 1474; https://doi.org/10.3390/app15031474 - 31 Jan 2025
Viewed by 632
Abstract
Cropland is fundamental to food security, and monitoring cropland non-agriculturalization through satellite enforcement can effectively manage and protect cropland. However, existing research primarily focuses on optical imagery, and there are problems such as low data processing efficiency and long updating cycles, which make [...] Read more.
Cropland is fundamental to food security, and monitoring cropland non-agriculturalization through satellite enforcement can effectively manage and protect cropland. However, existing research primarily focuses on optical imagery and suffers from problems such as low data processing efficiency and long updating cycles, which make it difficult to meet the needs of large-scale rapid monitoring. To comprehensively and accurately obtain cropland change information, this paper proposes a method based on the Google Earth Engine (GEE) cloud platform that combines optical imagery and synthetic aperture radar (SAR) data for the quick and accurate detection of cropland non-agriculturalization. The method uses existing land-use/land cover (LULC) products to quickly update cropland mapping, employs change vector analysis (CVA) to detect non-agricultural changes in cropland, and introduces vegetation indices to remove pseudo-changes. Using Shanwei City, Guangdong Province, as a case study, the results show that (1) the cropland map generated in this study aligns well with the actual distribution of cropland, achieving an accuracy of 90.8%; (2) compared to using optical imagery alone, combining optical and SAR data improves monitoring accuracy by 22.7%, with an overall accuracy of 73.65%; (3) over the past five years, cropland changes in Shanwei followed a pattern of an initial increase followed by a decrease. This research provides a technical reference for the rapid, large-scale monitoring of cropland non-agriculturalization, thereby promoting the protection and rational utilization of cropland. Full article
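Change vector analysis (CVA), used above to flag non-agricultural conversion, computes the per-pixel magnitude of the feature-space difference between two dates and thresholds it. A minimal NumPy sketch; the two-band feature vector and the threshold below are illustrative, not the paper's calibrated inputs:

```python
import numpy as np

def cva_change_mask(t1, t2, threshold):
    """Change vector analysis: per-pixel Euclidean magnitude of the
    feature-space difference between two dates. Inputs are stacked
    feature images of shape (bands, rows, cols)."""
    magnitude = np.sqrt(((t2 - t1) ** 2).sum(axis=0))
    return magnitude, magnitude > threshold

# Two dates, two features per pixel (e.g., an optical band and SAR backscatter).
t1 = np.zeros((2, 2, 2))
t2 = np.zeros((2, 2, 2))
t2[:, 0, 0] = [0.6, 0.8]  # strong change at pixel (0, 0) only
mag, changed = cva_change_mask(t1, t2, threshold=0.5)
```

In the paper's pipeline the raw CVA mask is then filtered with vegetation indices to discard pseudo-changes (bare soil, standing water, crop growth) before counting change patches.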
Figure 1: Overview map of the study area: (a) geographic location of the study area; (b) distribution of cultivated land in the study area (total land area 211.558 square kilometers; cropland area 2571.25 hectares); (c) elevation information of the study area.
Figure 2: Flowchart of the method.
Figure 3: Screening of high-quality sample points.
Figure 4: Examples of typical change patches: (a) optical image at time T1; (b) optical image at time T2; (c) SAR image at time T1; (d) SAR image at time T2. Blue boxes highlight the regions that changed between the two dates.
Figure 5: Removal of pseudo-changes.
Figure 6: Distribution of samples.
Figure 7: Distribution of cropland in Shanwei.
Figure 8: Comparison of cropland extraction results: (a) optical images; (b) ESA result; (c) ESRI result; (d) DW result; (e) result of this paper.
Figure 9: Distribution of detected changes by year: (a) 2019; (b) 2020; (c) 2021; (d) 2022; (e) 2023.
Figure 10: Spatial distribution of results over DEM data.
Figure 11: Number of annual change patches in Shanwei.
Figure 12: Comparison of optical-only and SAR–optical collaborative experimental results.
Figure 13: Example of pseudo-changes caused by bare soil: (a) optical image at T1; (b) optical image at T2; (c) SAR image at T1; (d) SAR image at T2.
Figure 14: Example of pseudo-changes caused by water accumulation: (a) optical image at T1; (b) optical image at T2; (c) SAR image at T1; (d) SAR image at T2.
Figure 15: Example of pseudo-changes caused by crop growth.
Full article ">
13 pages, 9419 KiB  
Article
Development of Deployable Reflector Antenna for the SAR-Satellite, Part 3: Environmental Test of Structural-Thermal Model
by Hyun-Guk Kim, Dong-Geon Kim, Ryoon-Ho Do, Min-Ju Kwak, Kyung-Rae Koo and Youngjoon Yu
Appl. Sci. 2025, 15(3), 1436; https://doi.org/10.3390/app15031436 - 30 Jan 2025
Viewed by 531
Abstract
The concept of synthetic aperture radar (SAR) has the advantage of being able to obtain high-quality images even when the target area is at night or covered with obstacles such as clouds or fog. These imaging capabilities have led to a rapid increase [...] Read more.
Synthetic aperture radar (SAR) can obtain high-quality images even when the target area is imaged at night or covered by obstacles such as clouds or fog. These imaging capabilities have led to a rapid increase in demand for space SAR imagery across government, military, and commercial sectors. A SAR deployable reflector antenna is developed in this series of papers. Satellite imaging performance is influenced by the antenna aperture size; to improve image acquisition performance, the SAR antenna is configured with several foldable CFRP reflectors. In this paper, an experimental investigation of the structural-thermal model (STM) deployable reflector antenna is performed. During launch, the satellite and payload are subjected to dynamic loads. In the STM phase, an acoustic test was conducted to evaluate the structural stability of the deployable reflector antenna in the acoustic environment, and a sinusoidal vibration test was performed to identify the fundamental frequencies in the in-plane and normal directions and to evaluate the structural stability of the reflector antenna. Using experimental data from the thermal balance test, a well-correlated thermal analysis model was established for orbital thermal analysis. The environmental test results in the STM phase show that the deployable reflector antenna is structurally stable under the structural and thermal environments. The configuration of the deployable reflector antenna determined in the STM phase can be applied to the qualification model. Full article
(This article belongs to the Section Aerospace Science and Engineering)
Figure 1: General flow chart for the development of the satellite/payload.
Figure 2: Conceptual diagram of the deployable reflector antenna in the STM phase.
Figure 3: Concept of the sinusoidal vibration test: test configuration with the vibration jig for the (a) x-, (b) y-, and (c) z-axes, and (d) test procedure.
Figure 4: Modal analysis of the deployable reflector antenna under the stowed condition: (a) analysis configuration; (b)–(d) mode shapes, frequencies, and effective modal mass along the x-, y-, and z-axes, respectively.
Figure 5: Sinusoidal vibration test: (a) test setup for the in-plane direction; (b) required environmental test specification; test specifications for the (c) in-plane and (d) out-of-plane directions; low-level sine sweep (LLSS) results for the (e) in-plane and (f) out-of-plane directions. The frequency and amplitude at the peak marked by the dashed circles in (e,f) serve as the reference for the modal survey.
Figure 6: Concept of the acoustic load test: (a) test configuration with the acoustic jig; (b) test procedure.
Figure 7: Acoustic load test: (a) required environmental specification and measured sound pressure level; (b) low-level acoustic random test results before and after the full-level acoustic load test; no tolerance guideline exists in the high-frequency range for the 4 kHz and 8 kHz center frequencies (dashed circle in (b)); (c) low-level acoustic random test results before/after the acoustic load test; the frequency and amplitude at the first peak (dashed circle in (c)) serve as the reference for the modal survey.
Figure 8: Concept of the thermal balance test: (a) test configuration with the thermal vacuum chamber; (b) test procedure.
Figure 9: Results of the thermal balance test: (a) test configuration; (b) thermal mathematical model (TMM) with three patch heaters; time history of (c) vacuum pressure and (d) shroud temperature over the test duration, consisting of (I) evacuation and transient into the cold condition, (II) the cold balance condition, (III) break time and transient into the hot condition, (IV) the hot balance condition, and (V) the transient back to ambient.
24 pages, 7131 KiB  
Article
Soil Moisture Retrieval in the Northeast China Plain’s Agricultural Fields Using Single-Temporal L-Band SAR and the Coupled MWCM-Oh Model
by Zhe Dong, Maofang Gao and Arnon Karnieli
Remote Sens. 2025, 17(3), 478; https://doi.org/10.3390/rs17030478 - 30 Jan 2025
Viewed by 443
Abstract
Timely access to soil moisture distribution is critical for agricultural production. As an in-orbit L-band synthetic aperture radar (SAR), SAOCOM offers high penetration and full polarization, making it suitable for agricultural soil moisture estimation. In this study, based on the single-temporal coupled water [...] Read more.
Timely access to soil moisture distribution is critical for agricultural production. As an in-orbit L-band synthetic aperture radar (SAR), SAOCOM offers high penetration and full polarization, making it suitable for agricultural soil moisture estimation. In this study, based on the single-temporal coupled water cloud model (WCM) and Oh model, we first modified the WCM (MWCM) to incorporate bare soil effects on backscattering using SAR data, enhancing the scattering representation during crop growth. Additionally, the Oh model was revised to enable retrieval of both the surface layer (0–5 cm) and underlying layer (5–10 cm) soil moisture. SAOCOM data from 19 June 2022, and 23 June 2023 in Bei’an City, China, along with Sentinel-2 imagery from the same dates, were used to validate the coupled MWCM-Oh model individually. The enhanced vegetation index (EVI), normalized difference vegetation index (NDVI), and leaf area index (LAI), together with the radar vegetation index (RVI) served as vegetation descriptions. Results showed that surface soil moisture estimates were more accurate than those for the underlying layer. LAI performed best for surface moisture (RMSE = 0.045), closely followed by RVI (RMSE = 0.053). For underlying layer soil moisture, RVI provided the most accurate retrieval (RMSE = 0.038), while LAI, EVI, and NDVI tended to overestimate. Overall, LAI and RVI effectively capture surface soil moisture, and RVI is particularly suitable for underlying layers, enabling more comprehensive monitoring. Full article
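The water cloud model (WCM) underlying this retrieval expresses total backscatter as a canopy volume-scattering term plus the soil backscatter attenuated twice through the canopy. Below is a minimal forward-model sketch in the standard (Attema–Ulaby) form; the coefficients A and B and all input values are illustrative, and the paper's modified WCM adds a bare-soil contribution not reproduced here:

```python
import math

def wcm_backscatter(sigma_soil, vwc, theta_deg, A=0.12, B=0.09):
    """Water cloud model in the linear power domain: total backscatter is
    a canopy term plus the soil term attenuated twice through the canopy.
    A, B are empirical vegetation coefficients (illustrative values);
    vwc is vegetation water content, theta the incidence angle."""
    cos_t = math.cos(math.radians(theta_deg))
    tau2 = math.exp(-2.0 * B * vwc / cos_t)     # two-way canopy transmissivity
    sigma_veg = A * vwc * cos_t * (1.0 - tau2)  # canopy volume scattering
    return sigma_veg + tau2 * sigma_soil

bare = wcm_backscatter(sigma_soil=0.05, vwc=0.0, theta_deg=35)  # no canopy
dense = wcm_backscatter(sigma_soil=0.05, vwc=3.0, theta_deg=35)
```

Soil moisture retrieval then runs this model in reverse: with VWC estimated from a vegetation index (EVI, NDVI, LAI, or RVI), the canopy terms are removed from the observed backscatter to isolate sigma_soil, which the Oh model maps to a dielectric constant and hence moisture.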
Show Figures

Figure 1

Figure 1
<p>Location of the study area: (<b>a</b>) China; (<b>b</b>) land use of Heilongjiang Province; (<b>c</b>) distribution of points in the study area (2023).</p>
Full article ">Figure 2
<p>Scattering mechanism of vegetation pre-growth on the pixel scale; θ represents the incidence angle, while the red arrow denotes the incident radar wave. (<b>a</b>) Backscattering from the vegetation canopy (green arrow); (<b>b</b>) backscattering from the soil through the vegetation (light green arrow); and (<b>c</b>) backscattering from bare soil (brown arrow).</p>
Full article ">Figure 3
<p>Schematic illustration of the soil moisture retrieval process. CAL and VAL in Part C represent calibration and validation, respectively. Steps indicated by circled numbers represent key parameters: 1—Vegetation water content, 2—Surface roughness, 3—Dielectric constant, etc. The direction of the arrows indicates whether the parameter is an input (incoming arrows) or an output (outgoing arrows) at each step.</p>
Full article ">Figure 4
<p>Spatial search example using a 2 km radius search window. The elements d(u, i) represents the Euclidean distance between validation points and modeling points.</p>
Full article ">Figure 5
<p>The relationship between measured VWC and vegetation indices, ** represents a highly significant correlation, while the dotted line represents the correlation function between measured VWC and vegetation indices.</p>
Full article ">Figure 6
<p>Spatial distribution of VWC for different vegetation index retrievals.</p>
Full article ">Figure 7
<p>Frequency distribution of inverse vegetation water content for each index.</p>
Full article ">Figure 8
<p>Calculation of root-mean-square height of (<b>a</b>) surface roughness (s) and (<b>b</b>) underlying “roughness” (αs) for different indices. The red line indicates the median value, while the blue square represents the mean value.</p>
Fig">
Figure 9
<p>Spatial distributions and frequency comparisons of surface and underlying soil moisture retrieved with the MWCM-Oh model using various vegetation indices. (<b>a</b>), (<b>d</b>), (<b>g</b>), and (<b>j</b>) show surface soil moisture derived from EVI, NDVI, LAI, and RVI, respectively. (<b>b</b>), (<b>e</b>), (<b>h</b>), and (<b>k</b>) present the corresponding underlying soil moisture, while (<b>c</b>), (<b>f</b>), (<b>i</b>), and (<b>l</b>) display the frequency distributions comparing the two retrievals for each index.</p>
Fig">
Figure 10
<p>Scatter plots comparing predicted and measured soil moisture at two depths (0–5 cm and 5–10 cm) for different vegetation indices as model input: (<b>a</b>) EVI, (<b>b</b>) NDVI, (<b>c</b>) LAI, and (<b>d</b>) RVI.</p>
Fig">
Figure 11
<p>Taylor diagram comparing model accuracy using different vegetation indices as inputs. Symbols represent indices (circle: EVI, triangle: NDVI, square: LAI, hexagon: RVI), with green for surface soil moisture (0–5 cm) and red for underlying soil moisture (5–10 cm). Closer symbols to the black curve and origin indicate better performance.</p>
Fig">
Figure 12
<p>Distribution of α; α represents the ratio of underlying roughness to surface roughness at the same point. The red dotted line at α = 1.0 represents the theoretical maximum.</p>
">
29 pages, 21542 KiB  
Article
Study of Hydrologic Connectivity and Tidal Influence on Water Flow Within Louisiana Coastal Wetlands Using Rapid-Repeat Interferometric Synthetic Aperture Radar
by Bhuvan K. Varugu, Cathleen E. Jones, Talib Oliver-Cabrera, Marc Simard and Daniel J. Jensen
Remote Sens. 2025, 17(3), 459; https://doi.org/10.3390/rs17030459 - 29 Jan 2025
Abstract
The exchange of water, sediment, and nutrients in wetlands occurs through a complex network of channels and overbank flow. Although optical sensors can map channels at high resolution, they fail to identify narrow intermittent channels colonized by vegetation. Here we demonstrate an innovative application of rapid-repeat interferometric synthetic aperture radar (InSAR) to study hydrologic connectivity and tidal influences in Louisiana’s coastal wetlands, which can provide valuable insights into water flow dynamics, particularly in vegetation-covered and narrow channels where traditional optical methods struggle. The data were acquired by the airborne UAVSAR L-band sensor for the Delta-X mission. We applied interferometric techniques to rapid-repeat (~30 min) SAR imagery of the southern Atchafalaya basin acquired during two flights encompassing rising-to-high and ebbing-to-low tides. InSAR coherence is used to identify and differentiate permanent open-water channels from intermittent channels in which flow occurs underneath the vegetation canopy. The channel networks at rising and ebbing tides show significant differences in the extent of flow, with small vegetation-filled channels identified more clearly at rising-to-high tide. The InSAR phase change is used to identify locations on channel banks where overbank flow occurs, a critical component for modeling wetland hydrodynamics. This is the first study to use rapid-repeat InSAR to monitor tidal impacts on water flow dynamics in wetlands. The results show that the InSAR method outperforms traditional optical remote sensing in monitoring water flow in vegetation-covered wetlands, providing high-resolution data to support hydrodynamic models and critical support for wetland protection and management. Full article
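As background to how a coherence map of the kind used here is formed, a minimal boxcar coherence estimator might look like the following. The 5×5 estimation window, image size, and simulated speckle are illustrative assumptions, not the Delta-X processing parameters.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def coherence(s1, s2, win=5):
    """Boxcar estimate of interferometric coherence from two complex SLC images."""
    def boxsum(a):
        return sliding_window_view(a, (win, win)).sum(axis=(-1, -2))
    num = boxsum(s1 * np.conj(s2))                         # interferogram sum
    den = np.sqrt(boxsum(np.abs(s1) ** 2) * boxsum(np.abs(s2) ** 2))
    return np.abs(num) / den

# Simulated speckle stands in for co-registered SLCs from a ~30 min repeat pair.
rng = np.random.default_rng(1)
s1 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
s2 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

gamma_same = coherence(s1, s1)   # identical scenes: coherence ~ 1 everywhere
gamma_diff = coherence(s1, s2)   # decorrelated scenes: coherence drops sharply
```

Low coherence between rapid repeats then flags surfaces that changed between acquisitions, such as open or vegetation-covered water.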
Figure 1
<p>(<b>a</b>) Study area showing the UAVSAR image footprint (blue rectangle) and other areas discussed in the text. (<b>b</b>) Water level at the Amerada Pass and Berwick tide gauges during the two days of the UAVSAR acquisitions. (<b>c</b>,<b>d</b>) Plots showing the tidal variation during the ebbing-to-low tide (<b>c</b>) and rising-to-high tide (<b>d</b>) UAVSAR flights. The times of the UAVSAR acquisitions are indicated with vertical lines.</p>
Fig">
Figure 2
<p>Mean coherence image of NN and NN1 and 2 interferograms for the Wax Lake Delta area during rising tide (<b>a</b>,<b>b</b>) and ebbing tide (<b>c</b>,<b>d</b>) (Geographic location and extent of the area are shown with a purple dotted rectangle in <a href="#remotesensing-17-00459-f001" class="html-fig">Figure 1</a>. See <a href="#app1-remotesensing-17-00459" class="html-app">Supplement Figure S1</a> for full frame version). (<b>e</b>) Illustration showing the water level in a channel containing vegetation and SAR backscatter from the channel surface depending on the presence of emergent vegetation, which varies across the tidal cycle. (<b>f</b>) Variation in coherence with time over pixels identified as open water, wetland, and small/intermittent channels (locations were identified using high-resolution NAIP imagery (<a href="#app1-remotesensing-17-00459" class="html-app">Supplement Figure S4</a>)). Orange line in (<b>d</b>) shows the island used to identify representative locations for (<b>f</b>).</p>
Fig">
Figure 3
<p>Histograms of mean coherence of (<b>a</b>) NN and (<b>b</b>) NN1 and 2 rising-tide interferograms. (<b>c</b>) Flowchart of the steps to derive the channel maps from coherence showing the thresholds used. Pixels below the <math display="inline"><semantics> <mrow> <msub> <mi>γ</mi> <mrow> <mi>m</mi> <mi>e</mi> <mi>a</mi> <mi>n</mi> <mo>_</mo> <mi>N</mi> <mi>N</mi> </mrow> </msub> </mrow> </semantics></math> threshold are identified as open water channels. Pixels below the <math display="inline"><semantics> <mrow> <msub> <mi>γ</mi> <mrow> <mi>m</mi> <mi>e</mi> <mi>a</mi> <mi>n</mi> <mo>_</mo> <mi>N</mi> <mi>N</mi> <mn>12</mn> </mrow> </msub> </mrow> </semantics></math> threshold are identified as intermittent channels only if they are not identified as open water pixels.</p>
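The two-threshold decision rule in this flowchart can be sketched as follows. The numeric thresholds here are hypothetical stand-ins for the histogram-derived values.

```python
import numpy as np

def classify(coh_nn, coh_nn12, t_nn=0.35, t_nn12=0.5):
    """Apply the two-threshold rule: open water, then intermittent, else wetland."""
    open_water = coh_nn < t_nn                       # low coherence in NN pairs
    intermittent = (coh_nn12 < t_nn12) & ~open_water # low only in NN1/NN2 pairs
    wetland = ~(open_water | intermittent)           # high coherence throughout
    return open_water, intermittent, wetland

# Three example pixels: one per class.
coh_nn = np.array([0.20, 0.60, 0.90])
coh_nn12 = np.array([0.10, 0.30, 0.85])
ow, ic, wl = classify(coh_nn, coh_nn12)
# pixel 0 -> open water, pixel 1 -> intermittent channel, pixel 2 -> wetland
```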
Fig">
Figure 4
<p>(<b>a</b>) Connected components grown over the channel mask. (<b>b</b>) Connected components after a morphological erosion operation. (<b>c</b>) Channel bank pixels obtained by subtracting (<b>b</b>) from (<b>a</b>). (<b>d</b>) Flowchart of the steps to derive overbank flow from the channel mask and phase time series. Geographic location and extent of the area are shown with an orange dotted rectangle in <a href="#remotesensing-17-00459-f001" class="html-fig">Figure 1</a>.</p>
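The erode-and-subtract step for extracting bank pixels can be sketched with a one-pixel, 4-connected erosion; the toy mask below is a stand-in for a real connected component, and the structuring element is an assumption.

```python
import numpy as np

def erode(mask):
    """One-pixel binary erosion with a 4-connected structuring element."""
    m = np.pad(mask, 1, constant_values=False)
    return (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
            & m[1:-1, :-2] & m[1:-1, 2:])

channel = np.zeros((7, 7), dtype=bool)
channel[2:5, 1:6] = True           # a 3x5 channel segment
banks = channel & ~erode(channel)  # boundary ring: mask minus its erosion
```

Subtracting the eroded component from the original leaves exactly the one-pixel-wide ring of bank pixels on which the phase time series is sampled.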
Fig">
Figure 5
<p>(<b>a</b>) Channel mask using mean coherence of rising-tide NN interferograms; (<b>b</b>) channel mask using mean coherence of rising-tide NN1 and 2 interferograms; (<b>c</b>) combined mask retaining channels from (<b>a</b>) as open water channels and channels only in (<b>b</b>) as intermittent channels. The wetland category (green) contains pixels that maintained high average coherence in both NN and NN1 and 2 interferograms. (<b>d</b>) InSAR phase velocity from rising-tide data showing areas of water-level change. The warmer colors (yellow-red) indicate a rise in water level and the cooler colors (cyan-blue) indicate a fall in water level.</p>
Fig">
Figure 6
<p>Comparison of optical (<b>a</b>) AVIRIS-NG and (<b>b</b>) NAIP water channel masks with the (<b>c</b>) UAVSAR mask. Classification accuracy indicating the locations of water-class FPs and FNs in the UAVSAR mask compared to the (<b>d</b>) AVIRIS-NG and (<b>e</b>) NAIP masks. In (<b>d</b>,<b>e</b>), white indicates pixels whose class matches between the optical and SAR masks (see <a href="#app1-remotesensing-17-00459" class="html-app">Supplement Figure S2</a> for the full-frame version).</p>
Fig">
Figure 7
<p>(<b>a</b>) InSAR phase velocity (<math display="inline"><semantics> <mrow> <msub> <mi>ϕ</mi> <mrow> <mi>v</mi> <mi>e</mi> <mi>l</mi> </mrow> </msub> </mrow> </semantics></math>) for the rising-tide data set containing signals from both water-level change and tropospheric water vapor delay, the latter particularly evident in the far range (right-hand side); (<b>b</b>) illustrated procedure to identify noisy CCs shown for a single CC; (<b>c</b>–<b>e</b>) procedure applied to three different areas indicated with black rectangles in (<b>a</b>): (<b>c</b>) InSAR phase velocity; (<b>d</b>) phase velocity averaged over the interior of each CC; (<b>e</b>) noisy CCs removed after comparison of average CC phase velocity to the average phase velocity on the CC’s bank. Dotted ellipses focus on the CCs illustrated.</p>
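The screening logic in panel (b), which compares a component's interior phase velocity with that on its bank, might be sketched as below. The tolerance and the sample velocities are hypothetical choices, not values from the study.

```python
import numpy as np

def is_noisy_cc(vel_interior, vel_bank, tol=0.5):
    """Flag a connected component (CC) as noise when its interior phase
    velocity is indistinguishable from that on its bank: a genuine
    water-level signal should differ from the surrounding land, while
    tropospheric delay affects both alike."""
    return bool(abs(np.mean(vel_interior) - np.mean(vel_bank)) < tol)

water_cc = np.array([2.1, 2.3, 1.9])    # interior of a CC with a real signal
its_bank = np.array([0.2, 0.1, 0.3])
noisy_cc = np.array([1.0, 1.2, 0.9])    # interior matching its bank
noisy_bank = np.array([1.1, 0.9, 1.0])
```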
Fig">
Figure 8
<p>(<b>a</b>) SAR channel mask derived from rising-tide InSAR data. (<b>b</b>) Equivalent channel network observed with SAR using the ebbing-tide InSAR data. (<b>c</b>) Focus on the rectangle in (<b>a</b>) for the rising-tide channel network and (<b>d</b>) ebbing-tide channel network.</p>
Fig">
Figure 9
<p>(<b>a</b>) SAR channel mask and locations of islands (white polygons) showing the extent of intermittent channels identified by (<b>b</b>) SAR, (<b>c</b>) AVIRIS-NG, and (<b>d</b>) NAIP-derived channel masks. The white polygon in (<b>a</b>) shows the location of the islands zoomed in (<b>b</b>–<b>d</b>).</p>
Fig">
Figure 10
<p>Water-level increase and decrease identified by InSAR phase change on the banks of water channels as a function of time during (<b>a</b>) rising tide and (<b>b</b>) ebbing tide. Geographic location and extent of the area are shown with an orange dotted rectangle in <a href="#remotesensing-17-00459-f001" class="html-fig">Figure 1</a>.</p>
Fig">
Figure 11
<p>UAVSAR phase change observed in the interior of the wetland during (<b>a</b>,<b>d</b>) rising tide and (<b>b</b>,<b>e</b>) ebbing tide. The difference in where the water goes during the different stages of the tide is clear. Overbank flow is more extensive at rising/high tide, but small channels still transport water within the island interior at ebbing/low tide. (<b>c</b>,<b>f</b>) Optical images showing the landscape (source: Google Earth [<a href="#B86-remotesensing-17-00459" class="html-bibr">86</a>,<a href="#B87-remotesensing-17-00459" class="html-bibr">87</a>]). Service Layer Credits: © 2024 Airbus. Geographic locations of the areas in (<b>a</b>,<b>d</b>) are shown with a red circle and ellipse in <a href="#remotesensing-17-00459-f001" class="html-fig">Figure 1</a>.</p>
Fig">
Figure 12
<p>(<b>a</b>,<b>b</b>) UAVSAR phase velocity showing water-level change rate for rising tide and ebbing tide; (<b>c</b>,<b>d</b>) channels derived using a threshold on phase; (<b>e</b>,<b>f</b>) channels derived using a threshold on coherence; and (<b>g</b>,<b>h</b>) channels derived using both phase and coherence.</p>
Fig">
Figure 13
<p>The influence of acquisition time demonstrated on a continuously water-logged area in the interior of the wetland using close-ups of (<b>a</b>) SAR; (<b>b</b>) AVIRIS-NG; (<b>c</b>) NAIP-derived channel masks and (<b>d</b>) the aerial image of the corresponding area [<a href="#B96-remotesensing-17-00459" class="html-bibr">96</a>]. Geographic location and extent of the area are shown with a white polygon in <a href="#remotesensing-17-00459-f009" class="html-fig">Figure 9</a>.</p>
">
19 pages, 5807 KiB  
Article
BurgsVO: Burgs-Associated Vertex Offset Encoding Scheme for Detecting Rotated Ships in SAR Images
by Mingjin Zhang, Yaofei Li, Jie Guo, Yunsong Li and Xinbo Gao
Remote Sens. 2025, 17(3), 388; https://doi.org/10.3390/rs17030388 - 23 Jan 2025
Abstract
Synthetic Aperture Radar (SAR) is a crucial remote sensing technology with significant advantages, and ship detection in SAR imagery has garnered significant attention. However, existing ship detection methods often overlook feature extraction, and the unique imaging mechanisms of SAR hinder the direct application of feature extraction techniques developed for natural images. Moreover, detection methods based on oriented bounding boxes often prioritize accuracy excessively, increasing parameter counts and computational cost and thereby raising model complexity. To address these issues, we propose a novel two-stage detector, the Burgs-associated vertex offset encoding scheme (BurgsVO), for detecting rotated ships in SAR images. BurgsVO consists of two key modules: the Burgs equation heuristics module, which facilitates feature extraction, and the average diagonal vertex offset (ADVO) encoding scheme, which significantly reduces computational cost. Specifically, the Burgs equation module integrates temporal information with spatial data for effective feature aggregation, establishing a strong foundation for subsequent object detection. The ADVO encoding scheme reduces parameters through anchor transformation, leveraging the geometric similarity between quadrilaterals and triangles to further cut computational cost. Experimental results on the RSSDD and RSDD benchmarks demonstrate that the proposed BurgsVO outperforms state-of-the-art detectors in both accuracy and efficiency. Full article
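For orientation, the vertex representation that offset-encoding schemes such as ADVO operate on can be derived from an oriented bounding box as below. This is generic OBB geometry, not the ADVO encoding itself, which the paper defines; the sample box is arbitrary.

```python
import numpy as np

def obb_to_vertices(cx, cy, w, h, angle_rad):
    """Return the four vertices of an oriented box (cx, cy, w, h, angle)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])       # 2D rotation matrix
    half = np.array([[ w / 2,  h / 2],      # corner offsets before rotation
                     [-w / 2,  h / 2],
                     [-w / 2, -h / 2],
                     [ w / 2, -h / 2]])
    return half @ rot.T + np.array([cx, cy])

verts = obb_to_vertices(10.0, 5.0, 4.0, 2.0, 0.0)  # axis-aligned case
```

A vertex-offset scheme then regresses offsets from anchor points to such vertices rather than predicting (w, h, angle) directly.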
Figure 1
<p>The overall framework of our method.</p>
Fig">
Figure 2
<p>The structure of the Burgs equation heuristic module.</p>
Fig">
Figure 3
<p>Illustration of an OBB represented by an ADVO.</p>
Fig">
Figure 4
<p>Decoding regression diagram of ADVO.</p>
Fig">
Figure 5
<p>Visualization of the detection results of different methods on RSSDD. Red rectangles indicate the actual ship targets. Green and purple rectangles represent the detection results of five comparative methods and our method, respectively.</p>
Fig">
Figure 6
<p>Visualization of the detection results of different methods on RSDD. Red rectangles indicate the actual ship targets. Green and blue rectangles represent the detection results of five comparative methods and our method, respectively.</p>
Fig">
Figure 7
<p>Algorithm performance under ship size variations. Red rectangles indicate the actual ship targets, and purple rectangles represent the detection results of our method.</p>
Fig">
Figure 8
<p>Speed vs. accuracy on the RSSDD test set.</p>
">