Search Results (2,088)

Search Parameters:
Keywords = radar network

20 pages, 5213 KiB  
Article
Radar Moving Target Detection Based on Small-Sample Transfer Learning and Attention Mechanism
by Jiang Zhu, Cai Wen, Chongdi Duan, Weiwei Wang and Xiaochao Yang
Remote Sens. 2024, 16(22), 4325; https://doi.org/10.3390/rs16224325 (registering DOI) - 20 Nov 2024
Viewed by 199
Abstract
Moving target detection is one of the most important tasks of radar systems. The clutter echo received by radar is usually strong and heterogeneous when the radar works in a complex terrain environment, resulting in performance degradation in moving target detection. Utilizing prior knowledge of the clutter distribution in the space–time domain, this paper proposes a novel moving target detection network based on small-sample transfer learning and attention mechanism. The proposed network first utilizes offline data to train the feature extraction network and reduce the online training time. Meanwhile, the attention mechanism used for feature extraction is applied in the beam-Doppler domain to improve classification accuracy of targets. Then, a small amount of real-time data are applied to a small-sample transfer network to fine-tune the feature extraction network. Finally, the target detection can be realized by the fine-tuned network. Simulation experiments show that the proposed network can eliminate the influence of heterogeneous clutter on moving target detection, and the attention mechanism can improve clutter suppression under a low signal-to-noise ratio regime. The proposed network has a lower computational load compared to conventional neural networks, enabling its use in real-time applications on space-borne/airborne radars. Full article
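The paper itself does not include code, but the channel and spatial attention modules referenced above (and in Figures 4 and 5 below) are broadly CBAM-style blocks. The following is a minimal PyTorch sketch of that general idea; all layer sizes and names are invented for illustration rather than taken from the authors' implementation.

```python
# Hypothetical CBAM-style attention blocks, assumed similar in spirit to the
# channel/spatial attention modules described in the abstract (not the authors' code).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):                      # x: (B, C, H, W) beam-Doppler feature map
        avg = self.mlp(x.mean(dim=(2, 3)))     # global average-pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))      # global max-pooling branch
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w                           # re-weight channels

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)      # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)       # channel-wise max map
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                           # emphasize informative beam-Doppler cells

feat = torch.randn(4, 32, 64, 64)              # toy batch of beam-Doppler feature maps
out = SpatialAttention()(ChannelAttention(32)(feat))
```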
Figures:
Figure 1. Radar operational geometry.
Figure 2. Small-sample network architecture diagram.
Figure 3. Feature extraction network diagram.
Figure 4. Channel attention module network model.
Figure 5. Spatial attention module network model.
Figure 6. Few-shot transfer learning network model.
Figure 7. Schematic diagram of data selection and expansion.
Figure 8. Prediction results of the validation dataset: (a) One-FENet model detection results; (b) Two-FENet model detection results; (c) Three-FENet model detection results.
Figure 9. Comparison experiments under low-SCR conditions: (a) comparison experiment on dataset one; (b) comparison experiment on dataset two; (c) comparison experiment on dataset three.
Figure 10. The impact of the attention mechanism on classification prediction: (a) classification prediction probability for data one; (b) classification prediction probability for data two; (c) classification prediction probability for data three.
Figure 11. Comparison results with the few-shot network and the traditional STAP + CFAR method: (a) comparison experiment under the One-FENet model; (b) comparison experiment under the Two-FENet model; (c) comparison experiment under the Three-FENet model.
Figure 12. Moving target detection results with SSM-Net and STAP + CFAR: (a) detection results under the One-FENet model; (b) detection results for the first category using STAP + CFAR; (c) detection results under the Two-FENet model; (d) detection results for the second category using STAP + CFAR; (e) detection results under the Three-FENet model; (f) detection results for the third category using STAP + CFAR.
18 pages, 2514 KiB  
Article
Coastal Reclamation Embankment Deformation: Dynamic Monitoring and Future Trend Prediction Using Multi-Temporal InSAR Technology in Funing Bay, China
by Jinhua Huang, Baohang Wang, Xiaohe Cai, Bojie Yan, Guangrong Li, Wenhong Li, Chaoying Zhao, Liye Yang, Shouzhu Zheng and Linjie Cui
Remote Sens. 2024, 16(22), 4320; https://doi.org/10.3390/rs16224320 - 19 Nov 2024
Viewed by 214
Abstract
Reclamation is an effective strategy for alleviating land scarcity in coastal areas, thereby providing additional arable land and opportunities for marine ranching. Monitoring the safety of artificial reclamation embankments is crucial for protecting these reclaimed areas. This study employed synthetic aperture radar interferometry (InSAR) using 224 Sentinel-1A data, spanning from 9 January 2016 to 8 April 2024, to investigate the deformation characteristics of the coastal reclamation embankment in Funing Bay, China. We optimized the phase-unwrapping network by employing ambiguity-detection and redundant-observation methods to facilitate the multitemporal InSAR phase-unwrapping process. The deformation results indicated that the maximum observed land subsidence rate exceeded 50 mm per year. The Funing Bay embankment exhibited a higher level of internal deformation than areas closer to the sea. Time-series analysis revealed a gradual deceleration in the deformation rate. Furthermore, a geotechnical model was utilized to predict future deformation trends. Understanding the spatial dynamics of deformation characteristics in the Funing Bay reclamation embankment will be beneficial for ensuring the safe operation of future coastal reclamation projects. Full article
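The entry above mentions that "a geotechnical model" is used to extrapolate future deformation but does not specify it; a hyperbolic settlement curve is one common choice for reclaimed fill, and the sketch below shows how such a model could be fitted to an InSAR time series with SciPy. The hyperbolic form, parameter values, and acquisition count are assumptions for illustration only.

```python
# Illustrative only: fitting a hyperbolic settlement model s(t) = t / (a + b*t)
# to an InSAR deformation time series. The paper mentions "a geotechnical model"
# without code; the hyperbolic form here is an assumption on our part.
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(t, a, b):
    return t / (a + b * t)          # settlement (mm); asymptote 1/b as t grows

t_years = np.linspace(0.1, 8.2, 224)                 # ~224 Sentinel-1 acquisitions (assumed spacing)
s_obs = hyperbolic(t_years, 0.05, 0.018) + np.random.normal(0, 2.0, t_years.size)

(a, b), _ = curve_fit(hyperbolic, t_years, s_obs, p0=(0.1, 0.01))
print(f"fitted a={a:.3f}, b={b:.3f}, predicted ultimate settlement ~ {1 / b:.1f} mm")
print("predicted deformation at year 12:", hyperbolic(12.0, a, b))
```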
17 pages, 5205 KiB  
Article
Temporal Associations Between Polarimetric Updraft Proxies and Signatures of Inflow and Hail in Supercells
by Matthew S. Van Den Broeke and Erik R. Green
Remote Sens. 2024, 16(22), 4314; https://doi.org/10.3390/rs16224314 - 19 Nov 2024
Viewed by 235
Abstract
Recurring polarimetric radar signatures in supercells include deep and persistent differential reflectivity (ZDR) columns, hail inferred in low-level scans, and the ZDR arc signature. Prior investigations of supercell polarimetric signatures reveal positive correlations between the ZDR column depth and cross-sectional area and quantitative characteristics of the radar reflectivity field. This study expands upon prior work by examining temporal associations between supercell polarimetric radar signatures, incorporating a dataset of relatively discrete, right-moving supercells from the continental United States observed by the Weather Surveillance Radar 1988-Doppler (WSR-88D) network. Cross-correlation coefficients were calculated between the ZDR column area and depth and the base-scan hail area, ZDR arc area, and mean ZDR arc value. These correlation values were computed with a positive and negative lag time of up to 45 min. Results of the lag correlation analysis are consistent with prior observations indicative of storm cycling, including temporal associations between ZDR columns and inferred hail signatures/ZDR arcs in both tornadic and nontornadic supercells, but were most pronounced in tornadic storms. Full article
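The lagged cross-correlation analysis described above can be illustrated with a few lines of NumPy. The 5-minute volume spacing, the ±9-volume window (≈±45 min), and the toy series below are assumptions, not values from the study.

```python
# Sketch of a lagged cross-correlation between two storm metrics (e.g., ZDR column
# area and base-scan hail area) sampled at fixed radar-volume intervals.
import numpy as np

def lag_correlation(x, y, max_lag_steps):
    """Pearson correlation of x against y shifted by -max..+max steps."""
    out = {}
    for k in range(-max_lag_steps, max_lag_steps + 1):
        if k < 0:
            a, b = x[:k], y[-k:]
        elif k > 0:
            a, b = x[k:], y[:-k]
        else:
            a, b = x, y
        out[k] = np.corrcoef(a, b)[0, 1]
    return out

rng = np.random.default_rng(0)
zdr_col_area = rng.random(60).cumsum()               # toy series, 60 radar volumes
hail_area = np.roll(zdr_col_area, 3) + rng.normal(0, 0.5, 60)   # lags the column area

corrs = lag_correlation(zdr_col_area, hail_area, max_lag_steps=9)  # +/-45 min at 5 min/volume
best = max(corrs, key=lambda k: abs(corrs[k]))
print(f"max |r| at lag {best} volumes ({best * 5:+d} min): r = {corrs[best]:.2f}")
```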
(This article belongs to the Section Atmospheric Remote Sensing)
Figures:
Figure 1. A map showing the approximate starting location of the observation period for each storm in this study. Red triangles indicate tornadic storms, and green circles indicate nontornadic storms. The letter or number next to each storm location corresponds to the storm identifiers in Table 1.
Figure 2. Schematic demonstrating how lag was applied to two metrics being compared, for both a positive and a negative lag.
Figure 3. Box and violin plots containing median values over each storm's observation period of (a) 35 dBZ storm area, (b) storm-core maximum ZHH value (dBZ), and (c) polarimetrically inferred hail area (km²). Cyan diamonds indicate distribution means, horizontal black lines indicate distribution medians, box edges indicate the 25th and 75th percentiles, whiskers indicate the 10th and 90th percentiles, and outliers are indicated by circles. MWU and KS test p-values are indicated at the bottom of each panel.
Figure 4. As in Figure 3, except for median values of the (a) ZDR arc width (km), (b) ZDR arc area (km²), (c) mean value of ZDR in the ZDR arc (dB), (d) ZDR column area (km²), and (e) depth of the ZDR column (km).
Figure 5. Median correlograms for ZDR column area compared to other polarimetric radar metrics (blue lines): inferred hail areal extent at base scan (km²; row 1), the mean area of the ZDR arc (km²; row 2), and the mean value of ZDR within the ZDR arc (dB; row 3). The left column shows median correlograms for the 8 tornadic storms, and the right column shows median correlograms for the 4 nontornadic storms. The x-axis indicates lag in minutes. The vertical red line indicates the time of maximum correlation magnitude.
Figure 6. As in Figure 5, except for ZDR column depth (km) as an updraft proxy.
Figure 7. The hypothesized general relationship between updraft intensity (green dashed line), storm-relative inflow (leading an updraft pulse; blue dashed line), and hail area (lagging an updraft pulse; solid orange line) in supercell storms.
Figure 8. Departures from the 2126–2236 UTC mean value for ZDR arc magnitude (orange line, dB, departure value multiplied by 5 for scale), ZDR column depth (green line, km), and hail area (blue line, km², departure value divided by 10 for scale). Relevant peaks in these variables are indicated by vertical lines.
Figure 9. Radar image from KFFC at 2145 UTC on 18 March 2013, when the storm centroid was ~35 km from the radar: (a) 300 m CAPPI of ZHH, (b) 300 m CAPPI of ZDR, and (c) 5000 m CAPPI of ZDR. White annotations in (a,b) indicate radar-inferred hail, and the white annotation in (c) indicates the ZDR column area at 5 km above radar level.
Figure 10. As in Figure 9, except at 2159 UTC, when the storm centroid was ~27 km from the radar.
Figure 11. As in Figure 9, except at 2213 UTC, when the storm centroid was ~20 km from the radar.
21 pages, 12271 KiB  
Article
Detection of Marine Oil Spill from PlanetScope Images Using CNN and Transformer Models
by Jonggu Kang, Chansu Yang, Jonghyuk Yi and Yangwon Lee
J. Mar. Sci. Eng. 2024, 12(11), 2095; https://doi.org/10.3390/jmse12112095 - 19 Nov 2024
Viewed by 265
Abstract
The contamination of marine ecosystems by oil spills poses a significant threat to the marine environment, necessitating the prompt and effective implementation of measures to mitigate the associated damage. Satellites offer a spatial and temporal advantage over aircraft and unmanned aerial vehicles (UAVs) in oil spill detection due to their wide-area monitoring capabilities. While oil spill detection has traditionally relied on synthetic aperture radar (SAR) images, the combined use of optical satellite sensors alongside SAR can significantly enhance monitoring capabilities, providing improved spatial and temporal coverage. The advent of deep learning methodologies, particularly convolutional neural networks (CNNs) and Transformer models, has generated considerable interest in their potential for oil spill detection. In this study, we conducted a comprehensive and objective comparison to evaluate the suitability of CNN and Transformer models for marine oil spill detection. High-resolution optical satellite images were used to optimize DeepLabV3+, a widely utilized CNN model; Swin-UPerNet, a representative Transformer model; and Mask2Former, which employs a Transformer-based architecture for both encoding and decoding. The results of cross-validation demonstrate a mean Intersection over Union (mIoU) of 0.740, 0.840, and 0.804 for the three models, respectively, indicating their potential for detecting oil spills in the ocean. Additionally, we performed a histogram analysis on the predicted oil spill pixels, which allowed us to classify the types of oil. These findings highlight the considerable promise of the Swin Transformer models for oil spill detection in the context of future marine disaster monitoring. Full article
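For reference, the mean Intersection over Union (mIoU) reported above can be computed from predicted and labeled masks as in the short sketch below; the arrays are random stand-ins rather than PlanetScope data.

```python
# Minimal mean-IoU computation for binary oil-spill segmentation masks.
import numpy as np

def mean_iou(pred, label, num_classes=2):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 2, (512, 512))      # toy model output (0 = sea, 1 = oil)
label = np.random.randint(0, 2, (512, 512))     # toy ground-truth mask
print("mIoU:", mean_iou(pred, label))
```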
(This article belongs to the Special Issue Remote Sensing Applications in Marine Environmental Monitoring)
Figures:
Figure 1. Examples of image processing steps: (a) original satellite images, (b) images after gamma correction and histogram adjustment, and (c) labeled images.
Figure 2. Flowchart of this study, illustrating the processes of labeling, modeling, optimization, and evaluation using the DeepLabV3+, Swin-UPerNet, and Mask2Former models [23,24,25].
Figure 3. Concept of the 5-fold cross-validation in this study.
Figure 4. Examples of image data augmentation using the Albumentations library. The example images include random 90-degree rotation, horizontal flip, vertical flip, optical distortion, grid distortion, RGB shift, and random brightness/contrast adjustment.
Figure 5. Randomly selected examples from fold 1, including PlanetScope RGB images, segmentation labels, and predictions from DeepLabV3+ (DL), Swin-UPerNet (Swin), and Mask2Former (M2F).
Figure 6. Randomly selected examples from fold 2, including PlanetScope RGB images, segmentation labels, and predictions from DeepLabV3+ (DL), Swin-UPerNet (Swin), and Mask2Former (M2F).
Figure 7. Randomly selected examples from fold 3, including PlanetScope RGB images, segmentation labels, and predictions from DeepLabV3+ (DL), Swin-UPerNet (Swin), and Mask2Former (M2F).
Figure 8. Randomly selected examples from fold 4, including PlanetScope RGB images, segmentation labels, and predictions from DeepLabV3+ (DL), Swin-UPerNet (Swin), and Mask2Former (M2F).
Figure 9. Randomly selected examples from fold 5, including PlanetScope RGB images, segmentation labels, and predictions from DeepLabV3+ (DL), Swin-UPerNet (Swin), and Mask2Former (M2F).
Figure 10. Thick oil layers with a dark black tone: histogram distribution graph and box plot of oil spill pixels extracted from the labels, DeepLabV3+, Swin-UPerNet, and Mask2Former. The x-axis values represent the digital numbers (DNs) from PlanetScope images. (a) Oil mask, (b) histogram, and (c) box plot.
Figure 11. Thin oil layers with a bright silver tone: histogram distribution graph and box plot of oil spill pixels extracted from the labels, DeepLabV3+, Swin-UPerNet, and Mask2Former. The x-axis values represent the digital numbers (DNs) from PlanetScope images. (a) Oil mask, (b) histogram, and (c) box plot.
Figure 12. Thin oil layers with a bright rainbow tone: histogram distribution graph and box plot of oil spill pixels extracted from the labels, DeepLabV3+, Swin-UPerNet, and Mask2Former. The x-axis values represent the digital numbers (DNs) from PlanetScope images. (a) Oil mask, (b) histogram, and (c) box plot.
17 pages, 8145 KiB  
Article
Integrated Anti-Aliasing and Fully Shared Convolution for Small-Ship Detection in Synthetic Aperture Radar (SAR) Images
by Manman He, Junya Liu, Zhen Yang and Zhijian Yin
Electronics 2024, 13(22), 4540; https://doi.org/10.3390/electronics13224540 - 19 Nov 2024
Viewed by 231
Abstract
Synthetic Aperture Radar (SAR) imaging plays a vital role in maritime surveillance, yet the detection of small vessels poses a significant challenge when employing conventional Constant False Alarm Rate (CFAR) techniques, primarily due to the limitations in resolution and the presence of clutter. Deep learning (DL) offers a promising alternative, yet it still struggles with identifying small targets in complex SAR backgrounds because of feature ambiguity and noise. To address these challenges, our team has developed the AFSC network, which combines anti-aliasing techniques with fully shared convolutional layers to improve the detection of small targets in SAR imagery. The network is composed of three key components: the Backbone Feature Extraction Module (BFEM) for initial feature extraction, the Neck Feature Fusion Module (NFFM) for consolidating features, and the Head Detection Module (HDM) for final object detection. The BFEM serves as the principal feature extraction technique, with a primary emphasis on extracting features of small targets. The NFFM integrates an anti-aliasing element and is designed to accentuate the feature details of diminutive objects throughout the fusion procedure. The HDM is the detection head module and adopts a new fully shared convolution strategy to make the model more lightweight. Our approach has shown better performance in terms of speed and accuracy for detecting small targets in SAR imagery, surpassing other leading methods on the SSDD dataset. It attained a mean Average Precision (AP) of 69.3% and a specific AP for small targets (APS) of 66.5%. Furthermore, the network's robustness was confirmed using the HRSID dataset. Full article
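As one possible reading of the "fully shared convolution" detection head mentioned above, a single set of head weights can be reused across every feature-pyramid level. The PyTorch sketch below illustrates only that weight-sharing idea; channel counts, outputs, and module names are assumptions, not the AFSC implementation.

```python
# Hedged sketch of a fully shared detection head: the same conv weights process
# every pyramid level instead of one head per level (our interpretation, not the paper's code).
import torch
import torch.nn as nn

class SharedHead(nn.Module):
    def __init__(self, channels=128, num_classes=1):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.SiLU())
        self.cls = nn.Conv2d(channels, num_classes, 1)   # class logits
        self.reg = nn.Conv2d(channels, 4, 1)             # box offsets

    def forward(self, pyramid):
        outs = []
        for p in pyramid:            # identical weights reused at every pyramid level
            f = self.stem(p)
            outs.append((self.cls(f), self.reg(f)))
        return outs

levels = [torch.randn(1, 128, s, s) for s in (80, 40, 20)]   # toy feature pyramid
outputs = SharedHead()(levels)
print([tuple(o[0].shape) for o in outputs])
```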
(This article belongs to the Special Issue Advances in AI Technology for Remote Sensing Image Processing)
Figures:
Figure 1. Architecture of the proposed network AFSC, including BFEM, NFFM, and HDM.
Figure 2. Architecture of the proposed sub-module PC.
Figure 3. Architecture of the proposed module HDM.
Figure 4. Details regarding the dimensions of the bounding boxes within both datasets. For each dataset, the top-left graph shows the number of training samples in each category; the top-right figure shows the size and number of boxes; the lower-left figure depicts the location of each object's center relative to the entire image; and the lower-right figure shows each object's height-to-width ratio in comparison to the entire image.
Figure 5. Visual results on the SSDD dataset. Red bounding boxes indicate the actual positions, blue bounding boxes denote the predicted locations, yellow bounding boxes indicate missed detections, and the orange box indicates a false detection. From the first row to the ninth row: Ground Truth, Faster R-CNN, MobileNet, SSD, RetinaNet, YOLOv7, YOLOv8, YOLOv10, and AFSC.
Figure 6. Curves of the experimental results, with the x-axis indicating the number of epochs and the y-axis representing the corresponding quantitative results.
Figure 7. Based on the actual and predicted categories from the classification model, the confusion matrix organizes the dataset's records into a matrix format, where the rows correspond to the true categories and the columns correspond to the model's predicted categories. Shown are the confusion matrices for the base model and the AFSC model, respectively.
18 pages, 3920 KiB  
Article
A Multi-Parameter Optimization Method for Electromagnetic Characteristics Fitting Based on Deep Learning
by Jiaxing Hao, Sen Yang and Hongmin Gao
Appl. Sci. 2024, 14(22), 10652; https://doi.org/10.3390/app142210652 - 18 Nov 2024
Viewed by 342
Abstract
Electromagnetic technology is widely applied in numerous fields, and precise electromagnetic characteristic fitting technology has become a crucial part for enhancing system performance and optimizing design. However, it faces challenges such as high computational complexity and the difficulty in balancing the accuracy and generalization ability of the model. For example, the Radar Cross Section (RCS) distribution characteristics of a single corner reflector model or Luneberg lens provide a relatively stable RCS value within a certain airspace range, which to some extent reduces the difficulty of radar target detection and fails to truly evaluate the radar performance. This paper aims to propose an innovative multi-parameter optimization method for electromagnetic characteristic fitting based on deep learning. By selecting common targets such as reflectors and Luneberg lens reflectors as optimization variables, a deep neural network model is constructed and trained with a large amount of electromagnetic data to achieve high-precision fitting of the target electromagnetic characteristics. Meanwhile, an advanced genetic optimization algorithm is introduced to optimize the multiple parameters of the model to meet the error index requirements of radar target detection. In this paper, by combining specific optimization variables such as corner reflectors and Luneberg lenses with the deep learning model and genetic algorithm, the deficiencies of traditional methods in handling electromagnetic characteristic fitting are effectively addressed. The experimental results show that the 60° corner reflector successfully realizes the simulation of multiple peak characteristics of the target, and the Luneberg lens reflector achieves the simulation of a relatively small RCS average value with certain fluctuations in a large space range, which strongly proves that this method has significant advantages in improving the fitting accuracy and optimization efficiency, opening up new avenues for research and application in the electromagnetic field. Full article
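The optimization described above couples a learned electromagnetic surrogate with a genetic algorithm; the sketch below shows a stripped-down evolutionary loop (selection plus mutation only) fitting a toy RCS-versus-angle curve. The surrogate function, target pattern, and parameterization are placeholders, not the paper's models.

```python
# Toy evolutionary loop for fitting a target RCS-vs-angle curve by adjusting a few
# reflector parameters. The "surrogate" below is a stand-in for the deep-learning
# electromagnetic model used in the paper.
import numpy as np

rng = np.random.default_rng(1)
angles = np.linspace(-60, 60, 121)
target_rcs = 5 + 10 * np.exp(-(angles / 20.0) ** 2)        # assumed desired pattern (dBsm)

def simulate_rcs(params):           # placeholder surrogate: amplitude, width, offset
    a, w, c = params
    return c + a * np.exp(-(angles / w) ** 2)

def fitness(params):
    return -np.mean((simulate_rcs(params) - target_rcs) ** 2)   # negative MSE

pop = rng.uniform([0, 5, 0], [20, 40, 10], size=(40, 3))        # 40 candidate parameter sets
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]                     # selection: keep best half
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.3, (20, 3))  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best parameters:", best, "RMSE:", np.sqrt(-fitness(best)))
```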
Figures:
Figure 1. Structure diagram of the electromagnetic characteristic parameter optimization method.
Figure 2. Genetic algorithm basic flow diagram.
Figure 3. Capsule neuron model.
Figure 4. Schematic diagram of the CapsNet structure.
Figure 5. The RCS distribution characteristic.
Figure 6. Schematic diagram of the electromagnetic characteristic parameter optimization scheme.
Figure 7. Comparison of the RCS calculation results of different algorithms.
Figure 8. Objective function optimization results.
Figure 9. The RCS distribution characteristics obtained by the algorithm in this paper compared with the typical aircraft data.
Figure 10. Comparison of the fitting results obtained using the proposed algorithm with the typical aircraft data.
Figure 11. The fitting results of the RCS distribution characteristics of some typical targets: (a) ship; (b) armored vehicle.
19 pages, 21578 KiB  
Article
A Gradual Adversarial Training Method for Semantic Segmentation
by Yinkai Zan, Pingping Lu and Tingyu Meng
Remote Sens. 2024, 16(22), 4277; https://doi.org/10.3390/rs16224277 - 16 Nov 2024
Viewed by 474
Abstract
Deep neural networks (DNNs) have achieved great success in various computer vision tasks. However, they are susceptible to artificially designed adversarial perturbations, which limit their deployment in security-critical applications. In this paper, we propose a gradual adversarial training (GAT) method for remote sensing image segmentation. Our method incorporates a domain-adaptive mechanism that dynamically modulates input data, effectively reducing adversarial perturbations. GAT not only improves segmentation accuracy on clean images but also significantly enhances robustness against adversarial attacks, all without necessitating changes to the network architecture. The experimental results demonstrate that GAT consistently outperforms conventional standard adversarial training (SAT), showing increased resilience to adversarial attacks of varying intensities on both optical and Synthetic Aperture Radar (SAR) images. Compared to the SAT defense method, GAT achieves a notable defense performance improvement of 1% to 12%. Full article
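FGSM is the simplest of the attacks the robustness results above are evaluated against; a standard single-step implementation for a segmentation model looks like the sketch below. The stand-in model, label shapes, and epsilon value are illustrative assumptions.

```python
# Standard FGSM perturbation for a segmentation model (illustrative; not the paper's code).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, mask, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (pixel values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                                    # (B, classes, H, W)
    loss = F.cross_entropy(logits, mask)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()          # one signed-gradient step
    return perturbed.clamp(0.0, 1.0).detach()

model = torch.nn.Conv2d(3, 6, 3, padding=1)                  # stand-in "segmentation network"
x = torch.rand(2, 3, 64, 64)
y = torch.randint(0, 6, (2, 64, 64))
x_adv = fgsm_attack(model, x, y, epsilon=0.01)
print((x_adv - x).abs().max())                               # bounded by epsilon
```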
(This article belongs to the Special Issue SAR-Based Signal Processing and Target Recognition (Second Edition))
Figures:
Figure 1. Comparison of no defense (ND), active defense (AD), and passive defense (PD). Active defense gains robustness by adjusting the network, while passive defense relies on preprocessing operations outside the network.
Figure 2. Schematic diagram of the manifold hypothesis. Natural images lie on a low-dimensional manifold, while images with adversarial perturbations added to them lie outside the low-dimensional manifold.
Figure 3. GAT training flowchart. The GAT method proposed in this paper can be divided into two modules: intermediate domain data generation and a standard DNN training process. The intermediate domain data generation module generates intermediate domain data based on clean images and uses them as input to the latter. The standard DNN training process trains the model based on the input data and provides the former with parameters for the generation of adversarial perturbations.
Figure 4. A presentation of data from San Francisco: (a) Pauli decomposition result and (b) ground truth. Blue is water, green is vegetation, red is high-density urban, yellow is low-density urban, purple is development areas, and black is unlabeled background.
Figure 5. A presentation of data from Vaihingen. The blue box area in the left figure is the selected typical area with 4 different types of ground objects, which are used for analysis in the subsequent presentation of the experimental results. (a) ISPRS-Vaihingen dataset example and (b) ground truth. Blue is buildings, light blue is low vegetation, green is trees, yellow is cars, red is the background, and white is impervious surfaces.
Figure 6. Metric curves of segmentation results on the SF-RS2 dataset facing adversarial attacks with different attack intensities. The first row shows the Acc evaluation index curve, and the second row shows the F1 score evaluation index curve. From the first column to the fourth column, the attack algorithms shown are FGSM, DAG, PGD, and segPGD. The horizontal axis of each graph is the attack intensity, which ranges from 0.00 to 0.01, and the vertical axis is the evaluation index.
Figure 7. SF-RS2 dataset segmentation results in the face of FGSM, DAG, PGD, and segPGD attacks. The attack intensity ranges from 0 to 0.01. For each adversarial attack algorithm, the segmentation results of no defense, SAT, and GAT are compared in turn. Taking the yellow circle as an example, the feature type of the area is high-density city, and it can clearly be seen that the segmentation accuracy of GAT is better than that of SAT and no defense. The dotted gray lines correspond to 90% accuracy and the dotted yellow lines correspond to 75% accuracy. When accuracy is reduced to the same level, the GAT method can withstand a stronger attack intensity.
Figure 8. Metric curves of segmentation results on the ISPRS-Vaihingen dataset facing adversarial attacks with different attack intensities. The first row shows the Acc evaluation index curve, and the second row shows the F1 score evaluation index curve. From the first column to the fourth column, the attack algorithms shown are FGSM, DAG, PGD, and segPGD. The horizontal axis of each graph is the attack intensity, which ranges from 0.00 to 0.0157, and the vertical axis is the evaluation index.
Figure 9. ISPRS-Vaihingen dataset segmentation results in the face of FGSM, DAG, PGD, and segPGD attacks. The attack intensity ranges from 0 to 0.0196. For each adversarial attack algorithm, the segmentation results of no defense, SAT, and GAT are compared in turn. Taking the red box area as an example, the feature type of this area is building, and it can clearly be seen that the segmentation accuracy of GAT is better than that of SAT and no defense. The dotted gray lines correspond to 50% accuracy. When accuracy is reduced to the same level, the GAT method can withstand a stronger attack intensity.
17 pages, 6219 KiB  
Article
DGGNets: Deep Gradient-Guidance Networks for Speckle Noise Reduction
by Li Wang, Jinkai Li, Yi-Fei Pu, Hao Yin and Paul Liu
Fractal Fract. 2024, 8(11), 666; https://doi.org/10.3390/fractalfract8110666 - 15 Nov 2024
Viewed by 264
Abstract
Speckle noise is a granular interference that degrades image quality in coherent imaging systems, including underwater sonar, Synthetic Aperture Radar (SAR), and medical ultrasound. This study aims to enhance speckle noise reduction through advanced deep learning techniques. We introduce the Deep Gradient-Guidance Network (DGGNet), which features an architecture comprising one encoder and two decoders—one dedicated to image recovery and the other to gradient preservation. Our approach integrates a gradient map and fractional-order total variation into the loss function to guide training. The gradient map provides structural guidance for edge preservation and directs the denoising branch to focus on sharp regions, thereby preventing over-smoothing. The fractional-order total variation mitigates detail ambiguity and excessive smoothing, ensuring rich textures and detailed information are retained. Extensive experiments yield an average Peak Signal-to-Noise Ratio (PSNR) of 31.52 dB and a Structural Similarity Index (SSIM) of 0.863 across various benchmark datasets, including McMaster, Kodak24, BSD68, Set12, and Urban100. DGGNet outperforms existing methods, such as RIDNet, which achieved a PSNR of 31.42 dB and an SSIM of 0.853, thereby establishing new benchmarks in speckle noise reduction. Full article
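The loss design described above combines a fidelity term, a gradient-map term, and a fractional-order total variation term. The sketch below assembles an analogous composite loss with Sobel gradients and an ordinary first-order TV term standing in for the fractional-order version; the weights and structure are assumptions, not the authors' exact formulation.

```python
# Hedged sketch of a gradient-guided composite denoising loss: fidelity + gradient-map
# consistency (Sobel) + a plain first-order TV prior. The paper uses a *fractional-order*
# TV; the first-order term here is a simplification.
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
SOBEL_Y = SOBEL_X.transpose(2, 3)

def gradient_map(img):                        # img: (B, 1, H, W)
    gx = F.conv2d(img, SOBEL_X, padding=1)
    gy = F.conv2d(img, SOBEL_Y, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def total_variation(img):
    return (img[..., 1:, :] - img[..., :-1, :]).abs().mean() + \
           (img[..., :, 1:] - img[..., :, :-1]).abs().mean()

def gradient_guided_loss(denoised, clean, w_grad=0.1, w_tv=0.01):
    fidelity = F.l1_loss(denoised, clean)
    grad_term = F.l1_loss(gradient_map(denoised), gradient_map(clean))
    return fidelity + w_grad * grad_term + w_tv * total_variation(denoised)

print(float(gradient_guided_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))))
```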
Figures:
Figure 1. System architecture of a speckle noise reduction system.
Figure 2. The network structure of the proposed DGGNet. The DGGNet consists of one encoder and two decoders (one decoder works for the denoising branch, and the other works for the gradient branch). The gradient branch guides the denoising branch by fusing gradient information to enhance structure preservation.
Figure 3. The flow diagram of the proposed DGGNet.
Figure 4. Denoising visualization of our proposed DGGNet compared with competing methods on the ultrasound dataset. From left to right, we show the clean image, the noisy image, and the denoising results of SRAD [23], OBNLM [8], NLLRF [7], MHM [35], DnCNN [16], RIDNet [17], MSANN [20], and our proposed DGGNet.
Figure 5. Denoising visualization of our proposed DGGNet compared with competing methods on the ultrasound dataset. From left to right, we show the ground truth, the noisy image, and the denoising results of SRAD [23], OBNLM [8], NLLRF [7], DnCNN [16], MHM [35], RIDNet [17], MSANN [20], and our DGGNet.
Figure 6. Denoising visualization of our proposed DGGNet compared with competing methods on the real experimental data. From left to right, we show the noisy image and the denoising results of SRAD [23], OBNLM [8], NLLRF [7], MHM [35], DnCNN [16], RIDNet [17], MSANN [20], and our proposed DGGNet.
Figure 7. Average feature maps of the results of the upsampling block in the decoding architecture of the denoising branch in our proposed DGGNet. The top image in (a) is our denoising result, and the bottom image is the corresponding noisy image. (b–e) are the average feature maps of 16 × 16, 32 × 32, 64 × 64, and 128 × 128 in the denoising branch of the decoding structure. The upper image of each pair is the average feature map of the denoising branch with the gradient branch, while the lower image is without it. This shows that, guided by the gradient branch in our DGGNet, the denoising result preserves structure information better.
22 pages, 7753 KiB  
Article
Radar Echo Extrapolation Based on Translator Coding and Decoding Conditional Generation Adversarial Network
by Xingang Mou, Yuan He, Wenfeng Li and Xiao Zhou
Appl. Sci. 2024, 14(22), 10550; https://doi.org/10.3390/app142210550 - 15 Nov 2024
Viewed by 291
Abstract
In response to the shortcomings of current spatiotemporal prediction models, which frequently encounter difficulties in temporal feature extraction and the forecasting of medium to high echo intensity regions over extended sequences, this study presents a novel model for radar echo extrapolation that combines a translator encoder-decoder architecture with a spatiotemporal dual-discriminator conditional generative adversarial network (STD-TranslatorNet). Initially, an image reconstruction network is established as the generator, employing a combination of a temporal attention unit (TAU) and an encoder–decoder framework. Within this architecture, both intra-frame static attention and inter-frame dynamic attention mechanisms are utilized to derive attention weights across image channels, thereby effectively capturing the temporal evolution of time series images. This approach enhances the network’s capacity to comprehend local spatial features alongside global temporal dynamics. The encoder–decoder configuration further bolsters the network’s proficiency in feature extraction through image reconstruction. Subsequently, the spatiotemporal dual discriminator is crafted to encapsulate both temporal correlations and spatial attributes within the generated image sequences. This design serves to effectively steer the generator’s output, thereby augmenting the realism of the produced images. Lastly, a composite multi-loss function is proposed to enhance the network’s capability to model intricate spatiotemporal evolving radar echo data, facilitating a more comprehensive assessment of the quality of the generated images, which in turn fortifies the network’s robustness. Experimental findings derived from the standard radar echo dataset (SRAD) reveal that the proposed radar echo extrapolation technique exhibits superior performance, with average critical success index (CSI) and probability of detection (POD) metrics per frame increasing by 6.9% and 7.6%, respectively, in comparison to prior methodologies. Full article
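The critical success index (CSI) and probability of detection (POD) quoted above are contingency-table scores computed on thresholded echo maps; a minimal implementation at the 30 dBZ threshold used in the later figures is sketched below with random stand-in data.

```python
# CSI and POD for radar echo extrapolation, computed on thresholded reflectivity maps.
import numpy as np

def csi_pod(pred_dbz, obs_dbz, thresh=30.0):
    p, o = pred_dbz >= thresh, obs_dbz >= thresh
    hits = np.logical_and(p, o).sum()
    misses = np.logical_and(~p, o).sum()
    false_alarms = np.logical_and(p, ~o).sum()
    csi = hits / (hits + misses + false_alarms + 1e-9)
    pod = hits / (hits + misses + 1e-9)
    return csi, pod

pred = np.random.uniform(0, 60, (256, 256))     # toy predicted reflectivity (dBZ)
obs = np.random.uniform(0, 60, (256, 256))      # toy observed reflectivity (dBZ)
print("CSI = %.3f, POD = %.3f" % csi_pod(pred, obs))
```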
Figures:
Figure 1. STD-TranslatorNet network model architecture diagram: (a) the overall structure of the generator; (b) the detailed structure of the generator; (c) TAU; (d) dual temporal and spatial discriminators; (e) 3DBlock module; (f) inception module.
Figure 2. Schematic diagram of the translator coder structure.
Figure 3. Schematic of deep convolution with a small kernel.
Figure 4. Schematic of the inter-frame dynamic attention network.
Figure 5. Schematic of the S2D algorithm.
Figure 6. Schematic of the structure of the time–space dual discriminator.
Figure 7. Schematic of 3D convolution.
Figure 8. Model training process diagram.
Figure 9. Sample dataset visualization.
Figure 10. Radar data noise filtering.
Figure 11. Evaluation index processing.
Figure 12. Standard spatiotemporal prediction model temporal forecasting visualization example.
Figure 13. Standard spatiotemporal prediction model temporal prediction local zoom visualization example.
Figure 14. Comparative case study on the standard SEVIR dataset.
Figure 15. Translator replacement comparison validation experiment: model prediction visualization sample.
Figure 16. CSI metric line chart and box plot of 10-frame prediction images for the translator replacement experiment at 30 dBZ: (a) CSI metric line chart of 10-frame prediction images; (b) CSI metric box plot of 10-frame prediction images.
Figure 17. POD metric line chart and box plot of 10-frame prediction images for the translator replacement experiment at 30 dBZ: (a) POD metric line chart of 10-frame prediction images; (b) POD metric box plot of 10-frame prediction images.
Figure 18. FAR metric line chart and box plot of 10-frame prediction images for the translator replacement experiment at 30 dBZ: (a) FAR metric line chart of 10-frame prediction images; (b) FAR metric box plot of 10-frame prediction images.
25 pages, 2899 KiB  
Article
Learning Omni-Dimensional Spatio-Temporal Dependencies for Millimeter-Wave Radar Perception
by Hang Yan, Yongji Li, Luping Wang and Shichao Chen
Remote Sens. 2024, 16(22), 4256; https://doi.org/10.3390/rs16224256 - 15 Nov 2024
Viewed by 522
Abstract
Reliable environmental perception capabilities are a prerequisite for achieving autonomous driving. Cameras and LiDAR are sensitive to illumination and weather conditions, while millimeter-wave radar avoids these issues. Existing models rely heavily on image-based approaches, which may not be able to fully characterize radar sensor data or efficiently further utilize them for perception tasks. This paper rethinks the approach to modeling radar signals and proposes a novel U-shaped multilayer perceptron network (U-MLPNet) that aims to enhance the learning of omni-dimensional spatio-temporal dependencies. Our method involves innovative signal processing techniques, including a 3D CNN for spatio-temporal feature extraction and an encoder–decoder framework with cross-shaped receptive fields specifically designed to capture the sparse and non-uniform characteristics of radar signals. We conducted extensive experiments using a diverse dataset of urban driving scenarios to characterize the sensor’s performance in multi-view semantic segmentation and object detection tasks. Experiments showed that U-MLPNet achieves competitive performance against state-of-the-art (SOTA) methods, improving the mAP by 3.0% and mDice by 2.7% in RD segmentation and AR and AP by 1.77% and 2.03%, respectively, in object detection. These improvements signify an advancement in radar-based perception for autonomous vehicles, potentially enhancing their reliability and safety across diverse driving conditions. Full article
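The preprocessing pipeline summarized in Figure 1 below (mixing to raw ADC data, then FFTs producing range-azimuth and range-Doppler views) can be illustrated with a toy range-Doppler map; every radar parameter in the sketch is invented for the example rather than taken from the datasets used in the paper.

```python
# Toy range-Doppler (RD) view from a simulated FMCW beat-signal cube, illustrating the
# FFT-based preprocessing referenced in this entry. All parameters are invented.
import numpy as np

n_chirps, n_samples = 128, 256
rng = np.random.default_rng(2)
t = np.arange(n_samples)
# One point target: a constant beat frequency per chirp plus a Doppler phase across chirps.
beat = np.exp(1j * 2 * np.pi * 0.12 * t)                        # range (fast-time) tone
doppler = np.exp(1j * 2 * np.pi * 0.03 * np.arange(n_chirps))   # Doppler (slow-time) tone
cube = np.outer(doppler, beat) + 0.5 * (rng.standard_normal((n_chirps, n_samples))
                                        + 1j * rng.standard_normal((n_chirps, n_samples)))

range_profile = np.fft.fft(cube, axis=1)                              # range FFT (fast time)
rd_map = np.fft.fftshift(np.fft.fft(range_profile, axis=0), axes=0)   # Doppler FFT (slow time)
print("RD map shape:", rd_map.shape, "peak power:", np.abs(rd_map).max() ** 2)
```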
Figures:
Graphical abstract.
Figure 1. The complete millimeter-wave radar signal collection and preprocessing pipeline. First, the received and transmitted signals are mixed to generate raw ADC data. These signals are then subjected to various forms of FFT algorithms, resulting in the RA view, RD view, and RAD tensor, which are the RF signals prepared for further processing.
Figure 2. Overall framework of our U-MLPNet. The left part represents the multi-view encoder, the middle part is the latent space, and the right part is the dual-view decoder. The skip connections between the encoder and decoder effectively maintain the disparities between different perspectives and balance model performance. The latent space contains the U-MLP module, which can efficiently fuse multi-scale, multi-view global and local spatio-temporal features.
Figure 3. Radar RF features. The top row illustrates the CARRADA dataset with RGB images and RA, RD, and AD views arranged from left to right. The bottom row shows the echo of the CRUW dataset, with RGB images on the left and RA images on the right.
Figure 4. Overall framework of our U-MLP. The left side is the encoder, while the right side represents the decoder. The encoder employs a lightweight MLP to extract meaningful radar features. The decoder progressively integrates these features and restores resolution in a stepwise manner.
Figure 5. The receptive field of U-MLP. The original receptive field, the receptive field proposed in this paper, and the equivalent guard band are displayed from left to right. Feature points, the guard band, and feature regions are distinguished by orange, a blue diagonal grid, and light blue, respectively.
Figure 6. Visual comparison of RA views for various algorithms on the CARRADA dataset. The pedestrian category is annotated in red, the car category in blue, and the cyclist category in green.
Figure 7. Visual comparison of RD views for various algorithms on the CARRADA dataset. The pedestrian category is highlighted in red, the car category in blue, and the cyclist category in green. (a–h) RGB images, RF images, ground truth (GT), U-MLPNet, TransRadar, PeakConv, TMVA-Net, and MVNet, respectively.
Figure 8. Polar plot of RD views for various algorithms on the CARRADA dataset across different categories. Each line represents the mIoU of a specific algorithm across these categories, with higher values indicating superior performance.
Figure 9. Visual comparison of RA views for various algorithms on the CRUW dataset. The pedestrian category is annotated in red, the car category in blue, and the cyclist category in green.
Figure 10. To evaluate the performance and robustness of U-MLPNet in complex environments, we conduct qualitative testing using a nighttime dataset.
21 pages, 3319 KiB  
Article
Seamless Optimization of Wavelet Parameters for Denoising LFM Radar Signals: An AI-Based Approach
by Talaat Abdelfattah, Ali Maher, Ahmed Youssef and Peter F. Driessen
Remote Sens. 2024, 16(22), 4211; https://doi.org/10.3390/rs16224211 - 12 Nov 2024
Viewed by 338
Abstract
Linear frequency modulation (LFM) signals are pivotal in radar systems, enabling high-resolution measurements and target detection. However, these signals are often degraded by noise, significantly impacting their processing and interpretation. Traditional denoising methods, including wavelet-based techniques, have been extensively used to address this issue, yet they often fall short in terms of optimizing performance due to fixed parameter settings. This paper introduces an innovative approach by combining wavelet denoising with long short-term memory (LSTM) networks specifically tailored for LFM signals in radar systems. By generating a dataset of LFM signals at various signal-to-noise ratios (SNRs) to ensure diversity, we systematically identified the optimal wavelet parameters for each noisy instance. These parameters served as training labels for the proposed LSTM-based architecture, which learned to predict the most effective denoising parameters for a given noisy LFM signal. Our findings reveal a significant enhancement in denoising performance, attributed to the optimized wavelet parameters derived from the LSTM predictions. This advancement not only demonstrates a superior denoising capability but also suggests a substantial improvement in radar signal processing, potentially leading to more accurate and reliable radar detections and measurements. The implications of this paper extend beyond modern radar applications, offering a framework for integrating deep learning techniques with traditional signal processing methods to optimize performance across various noise-dominated domains. Full article
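For context, the sketch below applies one fixed wavelet parameter set (mother wavelet, decomposition level, soft universal threshold) to a noisy LFM chirp with PyWavelets; in the paper these parameters are predicted per signal by the LSTM, whereas here they are simply hard-coded assumptions.

```python
# Wavelet shrinkage of a noisy LFM chirp for one fixed, assumed-optimal parameter set.
import numpy as np
import pywt

fs, T = 1e4, 0.1
t = np.arange(0, T, 1 / fs)
lfm = np.cos(2 * np.pi * (500 * t + 0.5 * 2e4 * t ** 2))        # 500 Hz -> 2.5 kHz chirp
noisy = lfm + 0.5 * np.random.randn(t.size)

wavelet, level = "sym8", 4                                       # assumed parameters
coeffs = pywt.wavedec(noisy, wavelet, level=level)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745                   # noise estimate, finest scale
thr = sigma * np.sqrt(2 * np.log(noisy.size))                    # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, wavelet)[: noisy.size]

snr = lambda ref, x: 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - x) ** 2))
print(f"SNR before: {snr(lfm, noisy):.1f} dB, after: {snr(lfm, denoised):.1f} dB")
```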
Figures:
Figure 1. Block diagram of the DWT-based denoising process for LFM radar signals.
Figure 2. LSTM building block, which acts as a memory cell by handling input, output, and forget gates.
Figure 3. The block diagram of the workflow integrating deep learning with wavelet denoising.
Figure 4. Proposed network architecture designed for classifying the best wavelet parameter for denoising LFM signals.
Figure 5. Distribution of optimal wavelet parameters.
Figure 6. Distribution of optimal wavelet parameters across SNR levels: (a) histogram of optimal mother functions across SNR levels; (b) histogram of optimal decomposition levels across SNR levels; (c) histogram of threshold rule selection across SNR levels.
Figure 7. Gaussian distribution of SNR values for each mother function and for each threshold rule.
Figure 8. Empirical cumulative distribution functions (CDFs) of SNR values for the Bayes and SURE threshold rules, illustrating the significant difference between the two distributions.
Figure 9. Focused analysis of the prevalence of the Bayes and SURE threshold rules at different SNR levels, illustrating the adaptive nature of threshold rule selection in response to SNR variations.
Figure 10. The training progress and average loss of the deep learning model across a broad range of SNR levels.
Figure 11. The training progress and average loss of the deep learning model for a selective set of SNR levels.
Figure 12. Box plot of the SNR distribution before and after denoising.
Figure 13. Scatter plot of noisy and denoised SNR of LFM signals.
Figure 14. Spectrograms of noisy and denoised LFM signals at various SNR levels; the top row is the noisy LFM signal, and the bottom row is the corresponding denoised LFM signal.
25 pages, 34342 KiB  
Article
Quantifying the Geomorphological Susceptibility of the Piping Erosion in Loess Using LiDAR-Derived DEM and Machine Learning Methods
by Sisi Li, Sheng Hu, Lin Wang, Fanyu Zhang, Ninglian Wang, Songbai Wu, Xingang Wang and Zongda Jiang
Remote Sens. 2024, 16(22), 4203; https://doi.org/10.3390/rs16224203 - 11 Nov 2024
Viewed by 463
Abstract
Soil piping erosion is an underground soil erosion process that is significantly underestimated or overlooked. It can lead to intense soil erosion and trigger surface processes such as landslides, collapses, and channel erosion. Conducting susceptibility mapping is a vital way to identify the potential for soil piping erosion, which is of enormous significance for soil and water conservation as well as geological disaster prevention. This study utilized airborne radar drones to survey and map 1194 sinkholes in Sunjiacha basin, Huining County, on the Loess Plateau in Northwest China. We identified seventeen key hydrogeomorphological factors that influence sinkhole susceptibility and used six machine learning models—support vector machine (SVM), logistic regression (LR), Convolutional Neural Network (CNN), K-Nearest Neighbors (KNN), random forest (RF), and gradient boosting decision tree (GBDT)—for the susceptibility assessment and mapping of loess sinkholes. We then evaluated and validated the prediction results of various models using the area under curve (AUC) of the Receiver Operating Characteristic Curve (ROC). The results showed that all six of these machine learning algorithms had an AUC of more than 0.85. The GBDT model had the best predictive accuracy (AUC = 0.94) and model migration performance (AUC = 0.93), and it could find sinkholes with high and very high susceptibility levels in loess areas. This suggests that the GBDT model is well suited for the fine-scale susceptibility mapping of sinkholes in loess regions. Full article
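A minimal version of the GBDT susceptibility workflow compared above (tabular factors in, per-sample susceptibility probability and AUC out) can be sketched with scikit-learn as follows; the synthetic feature matrix and labels are stand-ins, not the seventeen Sunjiacha basin factors.

```python
# Minimal GBDT susceptibility sketch: 17 hydro-geomorphological factors -> sinkhole
# probability, scored by ROC AUC. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(2388, 17))                    # assumed: 1194 sinkholes + 1194 negatives
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 2388) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]             # susceptibility score per sample
print("AUC:", round(roc_auc_score(y_te, prob), 3))
```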
Figures:
Figure 1. Study area overview: (a) location; (b) regional geological map; (c) digital orthophoto map (DOM) by UAS optical camera; (d) LiDAR-derived DEM.
Figure 2. The technical flow chart of this study.
Figure 3. UAS survey and results: (a) route planning; (b,c) Feima D2000 UAS; (d–g) ground control point surveying using handheld RTK; (h) local point clouds acquired by D-LiDAR2000; (i) local DOM acquired by D-CAM2000; (j–l) typical sinkhole photos taken in the field. The red circles in (i–l) mark the manually interpreted sinkhole polygons.
Figure 4. Geomorphic factor mapping for evaluating sinkhole susceptibility.
Figure 5. Sinkhole susceptibility maps and frequency distribution histograms of grid cells for the six machine learning methods: (a) SVM, (b) LR, (c) CNN, (d) KNN, (e) RF, and (f) GBDT, where "mean" stands for average and "SD" stands for standard deviation.
Figure 6. Areal comparison of susceptibility grades for the six models.
Figure 7. Comparison of the area under the ROC curve (AUC) for the six models in the validation step.
Figure 8. Geomorphological susceptibility mapping of loess sinkholes based on the GBDT model: (a) whole watershed; (b) shallow gully; (c) sub-catchment; (d) old landslide body; (e) the heads of several erosion gullies; (f) terrace.
Figure 9. The sinkhole area proportion of different susceptibility grades by the GBDT model.
Figure 10. Comparison of the area under the ROC curve (AUC) of the six models in the validation area.
Figure 11. Geomorphological susceptibility mapping by the GBDT model in the validation area: (a) location of the validation area; (b) susceptibility zoning map by the GBDT model.
25 pages, 24547 KiB  
Article
A Radio Frequency Interference Screening Framework—From Quick-Look Detection Using Statistics-Assisted Network to Raw Echo Tracing
by Jiayuan Shen, Bing Han, Yang Li, Zongxu Pan, Di Yin, Yugang Feng and Guangzuo Li
Remote Sens. 2024, 16(22), 4195; https://doi.org/10.3390/rs16224195 - 11 Nov 2024
Viewed by 351
Abstract
Synthetic aperture radar (SAR) is often affected by other high-power electromagnetic devices during ground observation, which causes unintentional radio frequency interference (RFI) in the acquired echo and brings adverse effects into data processing and image interpretation. When screening massive amounts of SAR data, there is an urgent need for the global perception and detection of interference. Existing RFI detection methods usually use only a single type of data, ignoring the information associations between the data at all levels of a real SAR product and thus introducing computational redundancy. Meanwhile, current deep learning-based algorithms are often unable to locate the extent of RFI coverage in the azimuth direction. Therefore, a novel RFI processing framework from quick looks to single-look complex (SLC) data and then to raw echo is proposed, taking data from the Sentinel-1 terrain observation with progressive scan (TOPS) mode as an example. By combining a statistics-assisted network with a sliding-window algorithm and an error-tolerant training strategy, RFI can be accurately detected and located in the quick looks of an SLC product. Then, through analysis of the TOPSAR imaging principle, the position of the RFI in the SLC image is preliminarily confirmed, and its possible distribution in the corresponding raw echo is further inferred; this is one of the first attempts to use spaceborne SAR data to establish the RFI location mapping between image data and raw echo. Compared with directly detecting all of the SLC data, the proposed framework shortens the time needed to determine the RFI distribution in the SLC data by 53.526%. All the research in this paper is conducted on real Sentinel-1 data, which verifies the feasibility and effectiveness of the proposed framework for radio frequency signal monitoring in advanced spaceborne SAR systems. Full article
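The sliding-window screening of quick looks described above can be sketched as follows. This is an illustrative Python mock-up under our own assumptions (the window size, azimuth step, and skewness-based decision rule are placeholders), not the paper's QLDecN implementation; the function classify_tile merely stands in for the trained statistics-assisted network.

```python
# Sketch only: tile a quick-look image with a sliding window and compute
# per-tile histogram statistics of the kind a statistics-assisted detector
# could use (RFI tends to add a long tail or a second peak to the histogram).
import numpy as np
from scipy.stats import skew, kurtosis

def slide_tiles(img, win, az_step):
    """Non-overlapping in range (columns), row-by-row stride in azimuth (rows)."""
    rows, cols = img.shape[:2]
    for r in range(0, rows - win + 1, az_step):
        for c in range(0, cols - win + 1, win):
            yield (r, c), img[r:r + win, c:c + win]

def tile_statistics(tile):
    flat = tile.astype(np.float64).ravel()
    return np.array([flat.mean(), flat.std(), skew(flat), kurtosis(flat)])

def classify_tile(stats_vec):
    # Placeholder decision rule; the paper uses a trained network instead.
    return stats_vec[2] > 1.0   # strong positive skew -> flag as possible RFI

quicklook = np.random.rand(1024, 1024)   # stand-in for a real quick-look image
flagged = [pos for pos, tile in slide_tiles(quicklook, win=256, az_step=64)
           if classify_tile(tile_statistics(tile))]
print(f"{len(flagged)} candidate RFI tiles")
```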
Show Figures

Figure 1: Schematic diagram of Sentinel-1 SAR data. (a) Quick-look images of the SLC product with RFI. (b) Correspondence and data volume comparison between Sentinel-1 raw echo data, SLC products, GRD products, and their corresponding quick-look images; the quick looks are much smaller than the original data, making them easier and faster to process.
Figure 2: Comparison of our algorithm and previous deep learning-based methods. (a1) Previous method in the image domain. (a2) Previous method in the signal domain. (b) Our algorithm uses the information and associations of the product at all levels to form a more comprehensive processing framework. The inputs are quick looks and parameter files. The first step obtains the distribution of RFI in the quick looks through network processing. The second step combines the sensing time and image size information to map the detection results onto the SLC product. The third step traces back to the possible RFI distribution in the original echo based on the imaging principle.
Figure 3: The principle of SAR imaging and interference formation. (a) Diagram of the TOPSAR mode of the Sentinel-1 satellite. (b) Schematic diagram of the formation mechanism of SAR RFI: the electromagnetic wave emitted by the external radiation source is mixed into the raw echo and interferes with the useful signal.
Figure 4: Overview of the proposed method. (a) Flowchart of the training and testing stages of the QLDecN model. (b) Flowchart of the entire detection process from quick-look images to real SLC data. (c) Mapping from the preview quick looks to the candidate slices selected by the sliding window, and from quick looks to SLC images.
Figure 5: The impact of RFI on the image histogram of Sentinel-1 quick-look images. (a1,b1) Image slices affected by RFI; (c1) an image slice without RFI. (a2–c2) R channels of the three images and (a3–c3) their histograms; (a4–c4) G channels and (a5–c5) their histograms; (a6–c6) B channels and (a7–c7) their histograms. The histogram of an image without RFI is typically smooth (red dotted line), whereas RFI increases the number of high-value pixels, producing longer tails or second peaks (blue curve and boxes).
Figure 6: The structure of QLDecN. The network consists of two branches. The lower branch, improved from a residual network, automatically extracts RFI features from images. The upper branch extracts the statistical characteristics of the RFI; its features are processed by a 1D-CNN and then fused pixel-wise with the features of the lower branch.
Figure 7: The moving rules of the sliding window. The left part shows movement without overlap in the range direction; the right part shows row-by-row movement in the azimuth direction, taking z = 4 as an example.
Figure 8: Examples of edge interference in the cut slices. (a) Normal slices. (b) Part of the RFI lies at the edge of the slice. (c) The RFI is located at the edge of the slice, marked by the red circles and arrows; this situation is what we define as edge interference.
Figure 9: (a) The observation order of each sub-swath and burst in the Sentinel-1 TOPSAR imaging mode. (b) The mapping of the sensing time of each sub-swath and burst onto the time axis.
Figure 10: The left diagram shows the geometric relationship of the radar antenna beam when it illuminates the area near the radiation source, assuming the interference source affects only one echo in the range direction. The right part shows the possible cases of interference in the obtained raw echo and image products.
Figure 11: The dataset used for QLDecN training, made from quick-look images of Sentinel-1 SLC products. (a) Slices with RFI. (b) Slices without RFI. The background scenes of the two sets do not necessarily correspond, and the RFI patterns in the dataset are diverse.
Figure 12: Training results of QLDecN under different learning rates. (a) Cross-entropy loss on the testing set versus epoch. (b) Classification accuracy versus epoch.
Figure 13: Loss and accuracy curves during training in the ablation study of the statistical feature branch. (a) Cross-entropy loss. (b) Classification accuracy.
Figure 14: Detection results of the processing framework. (a) Data acquired by Sentinel-1 over Beijing, China, in December 2019. (b) Data acquired by Sentinel-1 over Daejeon, South Korea, in February 2019. The presence of RFI, the sub-swath serial number, and the range of pixel rows containing RFI are detected in the quick-look images, and the corresponding sub-swath and burst numbers in the SLC product are determined using the parameter information.
Figure 15: Information on the real Sentinel-1 product used for verification and the mapping relationship between the different types of data.
Figure 16: Tracing the interference pattern that appears in the SLC data back to the raw echo to find the pulse numbers containing the same RFI (shown with time-frequency spectrograms). (a) Correspondence of RFI between line 305 of the SLC data and the raw echo. (b) Correspondence of RFI between line 565 of the SLC data and the raw echo. The left part shows two spectrograms with RFI from the SLC data; the right part shows the corresponding RFI spectrograms in different pulses of the raw echo. Boxes of the same color indicate the same RFI. RFI appearing in a given row of SLC data may appear in different pulses within the synthetic aperture time corresponding to that row when traced back to the raw echo.
13 pages, 1059 KiB  
Article
Joint Sensing and Communications in Unmanned-Aerial-Vehicle-Assisted Systems
by Petros S. Bithas, George P. Efthymoglou, Athanasios G. Kanatas and Konstantinos Maliatsos
Drones 2024, 8(11), 656; https://doi.org/10.3390/drones8110656 - 8 Nov 2024
Viewed by 473
Abstract
The application of joint sensing and communications (JSAC) technology in air–ground networks that include unmanned aerial vehicles (UAVs) offers unique opportunities for improving both sensing and communication performance. However, this type of network is also sensitive to the peculiar characteristics of the aerial communications environment, including shadowing and scattering caused by man-made structures. This paper investigates an aerial JSAC network and proposes a UAV-selection strategy that is shown to improve communication performance. We first derive analytical expressions for the received signal-to-interference ratio (SIR) for both the communication and sensing functions. These expressions are then used to analyze the outage and coverage probability of the communication part, as well as the ergodic radar estimation information rate and the detection probability of the sensing part. Moreover, a performance trade-off is investigated under a total bandwidth constraint. Various numerically evaluated results are presented, complemented by equivalent simulation results. These results demonstrate the applicability of the proposed analysis as well as the impact of shadowing, multipath fading severity, and interference on the system's performance. Full article
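The UAV-selection idea can be illustrated with a simple Monte Carlo estimate of the communication outage probability. The fading models (Nakagami-m power fading for the desired links, Rayleigh-faded interferers) and all parameter values below are our own illustrative assumptions, not the closed-form analysis derived in the paper.

```python
# Sketch only: outage probability when the serving UAV is the one with the
# largest received SIR, estimated by Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(0)
N_UAV, N_INT, TRIALS = 4, 3, 200_000
m = 2.0          # Nakagami-m shape parameter for the desired links (assumed)
gamma_th = 1.0   # SIR outage threshold in linear scale (assumed)

# Desired link power gains: gamma(m, 1/m) gives unit-mean Nakagami-m power fading
desired = rng.gamma(shape=m, scale=1.0 / m, size=(TRIALS, N_UAV))
# Aggregate interference: sum of unit-mean exponential (Rayleigh power) terms
interference = rng.exponential(scale=1.0, size=(TRIALS, N_INT)).sum(axis=1)

sir_best = desired.max(axis=1) / interference   # UAV selection: pick the max-SIR link
outage_prob = np.mean(sir_best < gamma_th)
print(f"Estimated outage probability: {outage_prob:.4f}")
```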
Show Figures

Figure 1: System model of the considered JSAC scheme.
Figure 2: Communication function: OP vs. the outage threshold. Sensing function: EREIR vs. transmit power.
Figure 3: Coverage probability and detection probability vs. transmit power for different numbers of interfering sources.
Figure 4: Coverage probability and detection probability trade-off under bandwidth allocation for different distances.
Figure 5: Outage probability vs. the outage threshold for different values of m.
Figure 6: Outage probability vs. the outage threshold for different values of L.
15 pages, 11951 KiB  
Technical Note
Axis Estimation of Spaceborne Targets via Inverse Synthetic Aperture Radar Image Sequence Based on Regression Network
by Wenjing Guo, Qi Yang, Hongqiang Wang and Chenggao Luo
Remote Sens. 2024, 16(22), 4148; https://doi.org/10.3390/rs16224148 - 7 Nov 2024
Viewed by 329
Abstract
Axial estimation is an important task in the detection of non-cooperative space targets in orbit, with inverse synthetic aperture radar (ISAR) imaging serving as a fundamental approach to support it. However, most existing axial estimation methods rely on manually extracting and matching features of key corner points or linear structures in the images, which can degrade estimation accuracy. To address these issues, this paper proposes an axial estimation method for spaceborne targets based on a regression network applied to ISAR image sequences. First, taking the ALOS satellite as an example, its Computer-Aided Design (CAD) model is constructed through a prior analysis of its structural features. Target echoes are then generated using electromagnetic simulation software, followed by imaging processing, analysis of the imaging characteristics, and determination of the axial labels. Finally, in contrast to traditional classification approaches, this study introduces a straightforward yet effective regression network specifically designed for ISAR image sequences. The network replaces the classification loss with a loss function constrained by the minimum mean square error, allowing it to adaptively extract features and estimate the axial parameters. The effectiveness of the proposed method is validated through both electromagnetic simulations and experimental data. Full article
(This article belongs to the Special Issue Recent Advances in Nonlinear Processing Technique for Radar Sensing)
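The MSE-constrained regression idea described in the abstract can be sketched with a small PyTorch model that maps an ISAR image sequence to yaw and pitch angles. The architecture, input sizes, and dummy data below are assumptions made for illustration and do not reproduce the network described in the paper.

```python
# Sketch only: a CNN that regresses [yaw, pitch] from a stacked ISAR image
# sequence, trained with a mean-squared-error loss instead of a classifier.
import torch
import torch.nn as nn

class AxisRegressor(nn.Module):
    def __init__(self, seq_len=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(seq_len, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 4 * 4, 64),
                                  nn.ReLU(), nn.Linear(64, 2))  # outputs [yaw, pitch]

    def forward(self, x):                  # x: (batch, seq_len, H, W)
        return self.head(self.features(x))

model = AxisRegressor()
criterion = nn.MSELoss()                   # MSE-constrained regression loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(8, 3, 128, 128)       # dummy ISAR image-sequence batch
angles = torch.rand(8, 2) * 180.0          # dummy yaw/pitch labels in degrees
loss = criterion(model(images), angles)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(f"training loss: {loss.item():.3f}")
```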
Show Figures

Figure 1: The overall framework of axial estimation.
Figure 2: Definition of the orbit and body coordinate systems.
Figure 3: Definition of the yaw and pitch angles.
Figure 4: ALOS satellite modeling and typical electromagnetic simulation imaging: (a) in-orbit schematic of the ALOS satellite; (b) CAD model; (c) schematic of an imaging result.
Figure 5: Imaging results of the ALOS satellite at typical attitude angles.
Figure 6: Architecture of the regression network.
Figure 7: A sequence of ISAR images, all with a pitch angle of 70° and with yaw angles of 155°, 160°, and 165° from left to right.
Figure 8: Typical ISAR imaging results with the corresponding CAD models: (a) typical ISAR imaging results; (b) CAD models.
Figure 9: Imaging results under varying SNR levels: (a) SNR: 0; (b) SNR: 5; (c) SNR: 10.
Figure 10: Yaw and pitch angle estimation errors for different loss functions.
Figure 11: Mean estimation errors over various yaw angle intervals at different pitch angles: (a) 15°; (b) 30°; (c) 45°; (d) 60°; (e) 75°.
Figure 12: Feature visualization for different convolutional layers: (a) first convolutional layer; (b) second convolutional layer; (c) third convolutional layer; (d) fourth convolutional layer; (e) convolutional layers for yaw estimation; (f) convolutional layers for pitch estimation.
Figure 13: Real images of the satellite.
Figure 14: Imaging results at different azimuth angles: (a) yaw: 0°; (b) yaw: 45°; (c) yaw: 75°; (d) yaw: 90°; (e) yaw: 100°; (f) yaw: 115°.