Search Results (379)

Search Parameters:
Keywords = adaptive radar detection

15 pages, 11951 KiB  
Technical Note
Axis Estimation of Spaceborne Targets via Inverse Synthetic Aperture Radar Image Sequence Based on Regression Network
by Wenjing Guo, Qi Yang, Hongqiang Wang and Chenggao Luo
Remote Sens. 2024, 16(22), 4148; https://doi.org/10.3390/rs16224148 - 7 Nov 2024
Viewed by 306
Abstract
Axial estimation is an important task for detecting non-cooperative space targets in orbit, with inverse synthetic aperture radar (ISAR) imaging serving as a fundamental approach to facilitate this process. However, most of the existing axial estimation methods usually rely on manually extracting and matching features of key corner points or linear structures in the images, which may result in a degradation in estimation accuracy. To address these issues, this paper proposes an axial estimation method for spaceborne targets via ISAR image sequences based on a regression network. Firstly, taking the ALOS satellite as an example, its Computer-Aided Design (CAD) model is constructed through a prior analysis of its structural features. Subsequently, target echoes are generated using electromagnetic simulation software, followed by imaging processing, analysis of imaging characteristics, and the determination of axial labels. Finally, in contrast to traditional classification approaches, this study introduces a straightforward yet effective regression network specifically designed for ISAR image sequences. This network transforms the classification loss into a loss function constrained by the minimum mean square error, which can be utilized to adaptively perform the feature extraction and estimation of axial parameters. The effectiveness of the proposed method is validated through both electromagnetic simulations and experimental data.
(This article belongs to the Special Issue Recent Advances in Nonlinear Processing Technique for Radar Sensing)
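The regression formulation described in the abstract above can be illustrated with a small sketch. The network below is not the authors' architecture: the layer sizes, the use of a three-image sequence stacked as input channels, and the degree-valued yaw/pitch labels are assumptions chosen only to show how a classification head is replaced by a two-output regression head trained with a minimum mean-square-error loss.

```python
# Illustrative sketch only (assumed shapes and layer sizes, not the paper's network).
import torch
import torch.nn as nn

class AxisRegressor(nn.Module):
    def __init__(self, seq_len: int = 3):
        super().__init__()
        # Treat the ISAR image sequence as input channels (assumption).
        self.features = nn.Sequential(
            nn.Conv2d(seq_len, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # two regression outputs: yaw and pitch (degrees)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = AxisRegressor()
images = torch.randn(8, 3, 128, 128)        # a batch of ISAR image triplets
labels = torch.rand(8, 2) * 180.0           # synthetic yaw/pitch labels
loss = nn.MSELoss()(model(images), labels)  # the minimum-MSE constraint
loss.backward()
```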
Figures:
Figure 1: The overall framework of axial estimation.
Figure 2: Definition of the orbit and body coordinate system.
Figure 3: Definition of the yaw and pitch angle.
Figure 4: ALOS satellite modeling and typical electromagnetic simulation imaging. (a) In-orbit schematic of the ALOS satellite; (b) CAD model; (c) Schematic of imaging result.
Figure 5: Imaging results of the ALOS satellite at typical attitude angles.
Figure 6: Architecture of the regression network.
Figure 7: The sequence of ISAR images whose pitch angles are all 70° and yaw angles are 155°, 160°, and 165° from left to right.
Figure 8: Typical ISAR imaging results with corresponding CAD modules. (a) Typical ISAR imaging results; (b) CAD modules.
Figure 9: The imaging results under varying SNR levels. (a) SNR: 0; (b) SNR: 5; (c) SNR: 10.
Figure 10: Yaw and pitch angle estimation errors for different loss functions.
Figure 11: Mean estimation errors for various yaw angle intervals in different pitch angles. (a) 15°; (b) 30°; (c) 45°; (d) 60°; (e) 75°.
Figure 12: Feature visualization with different convolutional layers. (a) First convolutional layer; (b) Second convolutional layer; (c) Third convolutional layer; (d) Fourth convolutional layer; (e) Convolutional layers for yaw estimation; (f) Convolutional layers for pitch estimation.
Figure 13: Real images of the satellite.
Figure 14: The imaging results at different azimuth angles. (a) yaw: 0°; (b) yaw: 45°; (c) yaw: 75°; (d) yaw: 90°; (e) yaw: 100°; (f) yaw: 115°.
21 pages, 2469 KiB  
Article
Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors
by Raz Lapid, Almog Dubin and Moshe Sipper
Mathematics 2024, 12(22), 3451; https://doi.org/10.3390/math12223451 - 5 Nov 2024
Viewed by 546
Abstract
Adaptive adversarial attacks, where adversaries tailor their strategies with full knowledge of defense mechanisms, pose significant challenges to the robustness of adversarial detectors. In this paper, we introduce RADAR (Robust Adversarial Detection via Adversarial Retraining), an approach designed to fortify adversarial detectors against such adaptive attacks while preserving the classifier’s accuracy. RADAR employs adversarial training by incorporating adversarial examples—crafted to deceive both the classifier and the detector—into the training process. This dual optimization enables the detector to learn and adapt to sophisticated attack scenarios. Comprehensive experiments on CIFAR-10, SVHN, and ImageNet datasets demonstrate that RADAR substantially enhances the detector’s ability to accurately identify adaptive adversarial attacks without degrading classifier performance.
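The dual-optimization idea can be sketched as follows. This is not the released RADAR code: the PGD step count, step size, epsilon, the sign conventions, and the binary labels (1 = adversarial) are assumptions used only to illustrate crafting adaptive examples against a frozen classifier and detector and then retraining the detector on them.

```python
# Hedged sketch of adversarial retraining for a detector (assumed hyperparameters).
import torch
import torch.nn.functional as F

def pgd_adaptive(x, y, classifier, detector, eps=8/255, alpha=2/255, steps=10):
    """Craft perturbations that both fool the classifier and push the detector
    toward a 'benign' verdict (gradient ascent on the combined loss)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(classifier(x_adv), y) + \
               F.binary_cross_entropy_with_logits(
                   detector(x_adv), torch.ones(len(x_adv), 1))
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
    return x_adv.clamp(0.0, 1.0)

def detector_training_step(x, y, classifier, detector, optimizer):
    x_adv = pgd_adaptive(x, y, classifier, detector)          # attack phase (models frozen)
    inputs = torch.cat([x, x_adv])
    targets = torch.cat([torch.zeros(len(x), 1), torch.ones(len(x_adv), 1)])
    loss = F.binary_cross_entropy_with_logits(detector(inputs), targets)
    optimizer.zero_grad(); loss.backward(); optimizer.step()  # update the detector only
    return loss.item()
```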
Figures:
Figure 1: General scheme of adversarial attacks. x: original image. x′_adv: standard adversarial attack. x″_adv: adaptive adversarial attack, targeting both f_θ (classifier) and g_ϕ (detector). The attacker’s goal is to fool the classifier into misclassifying the image and simultaneously deceive the detector into reporting the attack as benign (i.e., failing to detect the adversarial manipulation). The classifier f_θ and the detector g_ϕ share the same input but operate independently, with separate parameters and architectures. The classifier is trained to perform standard classification, while the detector is explicitly trained to identify adversarial instances.
Figure 2: Overview of RADAR. (1) The process begins with the generation of adaptive adversarial instances, X_adv. (2) After completing the batch attack, we train the detector g_ϕ using both benign instances X_ben and adversarial instances X_adv. The lock symbol refers to the models being frozen.
Figure 3: Generalization performance of adversarially trained detectors trained on CIFAR-10, SVHN, and ImageNet. Each adversarial detector was trained using each corresponding classifier; e.g., the ResNet-50 adversarial detector was trained using the ResNet-50 image classifier. This table shows the generalization of each detector to other classifiers, which it did not train with. A value represents the ROC-AUC of the respective detector–classifier pair for OPGD (top row) and SPGD (bottom row) with ε = 16/255.
Figure 4: Comparison of perturbations generated by PGD and OPGD across adversarial detectors. The red box contains the manipulated images and the corresponding perturbation over the adversarially trained detectors. Each row represents one model architecture, while the columns show the adversarial image generated by PGD, the corresponding perturbation from PGD, the adversarial image generated by OPGD, and the corresponding perturbation from OPGD. The original image is shown on the right for reference. Note: PGD only attacks a classifier, while OPGD attacks both a classifier and a detector.
Figure 5: CIFAR-10. Binary cross-entropy loss metrics, from the point of view of an attacker, are presented here in the context of crafting an adversarial instance from the test set. These plots illustrate the progression of loss over 20 different images of orthogonal projected gradient descent (OPGD) with the main goal being to minimize the loss. The progression for each image is represented in a distinct color. Top: Prior to adversarial training, the loss converges to zero after a small number of iterations. Bottom: After adversarial training, the incurred losses are significantly higher by orders of magnitude (note the difference in scales) compared with those observed in their standard counterparts. This shows that the detector is now resilient, i.e., far harder to fool.
Figure 6: SVHN. Binary cross-entropy loss metrics, from the point of view of an attacker, are presented here in the context of crafting an adversarial instance from the test set. The progression for each image is represented in a distinct color.
Figure 7: ImageNet. Binary cross-entropy loss metrics, from the point of view of an attacker, are presented here in the context of crafting an adversarial instance from the test set using OPGD. The progression for each image is represented in a distinct color.
Figure 8: AUC and SR@5 scores across different epsilon values for CIFAR-10, SVHN, and ImageNet datasets using OPGD. The performance of the adversarial detectors is illustrated, highlighting how AUC and SR@5 vary across different perturbation magnitudes.
Figure 9: AUC and SR@5 scores across different epsilon values for CIFAR-10, SVHN, and ImageNet datasets using SPGD.
Figure 10: Ablation studies were conducted using VGG-11, varying the (1) number of steps, (2) step size α, (3) batch size, (4) learning rate, and (5) whether the model was pretrained through standard training, presented sequentially from left to right.
Figure 11: Comparison of classification accuracies of models utilizing adversarial detectors trained with standard pretraining followed by adversarial training (solid lines) versus detectors trained exclusively with adversarial training (dashed lines) across CIFAR-10, SVHN, and ImageNet datasets. The plot demonstrates the impact of increasing ε values on the accuracy of the classification models when using the respective adversarial detectors.
26 pages, 3095 KiB  
Article
Joint Optimization Control Algorithm for Passive Multi-Sensors on Drones for Multi-Target Tracking
by Xin Guan, Yu Lu and Lang Ruan
Drones 2024, 8(11), 627; https://doi.org/10.3390/drones8110627 - 30 Oct 2024
Viewed by 302
Abstract
A distributed network of multiple unmanned aerial vehicles (UAVs) equipped with airborne passive bistatic radar (APBR) can form a passive detection network through cooperative networking technology, a novel passive early warning detection system. Its multi-target tracking performance has a significant impact on situational awareness of the detection area. This paper proposes a passive multi-sensors joint optimization control algorithm based on task adaptive switching, with the aim of addressing the impact of limited UAV sensors’ field of view (FOV) on multi-target tracking performance in APBR networks. Firstly, for a single UAV node, the Poisson Labeled Multi-Bernoulli (PLMB) filter is selected as the local filter of each node, with the objective of obtaining the local multi-target density independently. Subsequently, the consensus arithmetic average fusion rule is employed to address the multi-sensors density fusion problem in APBR networks. This enables the acquisition of the global multi-target density and multi-target tracks of the network. The task adaptive switching mechanism of the nodes is constructed further based on the partially observable Markov decision process (POMDP), and the objective functions for the UAV to perform the search task and the tracking task are derived based on differential entropy, respectively. Ultimately, a multi-node joint optimization control algorithm is devised. The simulation experiment demonstrates that the proposed algorithm is capable of effective control of multiple nodes to solve the multi-target search and tracking problem when the node FOV is limited. This further improves the multi-target tracking and fusion capability of the distributed APBR network.
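For readers unfamiliar with consensus fusion, the sketch below illustrates the arithmetic-average idea in its simplest form. It is not the paper's PLMB/POMDP pipeline: the grid-based densities, the neighbour weighting, and the iteration count are assumptions used only to show how repeated local averaging drives every node toward a network-wide average density.

```python
# Minimal consensus arithmetic-average fusion sketch (illustrative assumptions only).
import numpy as np

def consensus_aa_fusion(densities, adjacency, iterations=10):
    """densities: (n_nodes, n_cells) local multi-target intensities on a shared grid."""
    d = np.asarray(densities, dtype=float)
    W = np.asarray(adjacency, dtype=float) + np.eye(len(adjacency))
    W /= W.sum(axis=1, keepdims=True)   # row-stochastic averaging weights (assumed choice)
    for _ in range(iterations):
        d = W @ d                        # each node averages with its neighbours
    return d                             # rows converge toward a common fused density

local = np.random.rand(3, 5)                           # three UAV nodes, five grid cells
chain = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])    # node 2 links nodes 1 and 3
fused = consensus_aa_fusion(local, chain)
```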
Figures:
Figure 1: An overview of the research core.
Figure 2: Passive multi-node joint optimization control flow.
Figure 3: Scenario diagram of multi-target tracking simulation.
Figure 4: Motion tracks of each node under control commands.
Figure 5: GOSPA error profile of local filtering results and fusion results.
Figure 6: OSPA^(2) error plot for local filtering results and fusion results.
Figure 7: Comparison of GOSPA errors of fusion results under different algorithms.
Figure 8: Estimated target number of fusion results under different algorithms.
18 pages, 3412 KiB  
Article
Using Adjoint-Based Forecast Sensitivity to Observation to Evaluate a Wind Profiler Data Assimilation Strategy and the Impact of Data on Short-Term Forecasts
by Cheng Wang, Xiang-Yu Huang, Min Chen, Yaodeng Chen, Jiqin Zhong and Jian Yin
Remote Sens. 2024, 16(21), 3964; https://doi.org/10.3390/rs16213964 - 25 Oct 2024
Viewed by 428
Abstract
A wind profiler radar detects fine spatiotemporal resolution dynamical information, enabling the capture of meso- and micro-scale systems. Experience gained from observing system experiments (OSEs) studies confirms that reasonable profiler assimilation techniques can achieve improved short-term forecasts. This study further applies the adjoint-based forecast sensitivity to observation (FSO) method to investigate the quantitative impact of a profiler data assimilation strategy on short-term forecasts, and the results are consistent with those obtained from OSEs, further demonstrating that FSO and OSEs can be used to evaluate the effect of data assimilation techniques from different perspectives. Considering the unique advantage that the FSO can quantify the interactions between various observing systems and the impact on improving the model forecasts according to specific needs without costly additional calculations, we further diagnose in detail the observation impacts from multiple perspectives, including the observation platform, observation variables, and spatial distribution. And the results show that dynamical variables are more significant in improving forecasts compared to the other observed variables. Meanwhile, the dense profiler observations resulted in a more significant impact when radiosonde observations were not detected. The upper-level single winds monitored by profiler radars play a more important role in improving forecast skill. The FSO method measures the impact of an individual observing system, which can be used to enrich the evaluation of data assimilation schemes, efficiently calculate the impacts of multisource observations, and contribute to future development in adaptive observation, observation quality control, and observation error optimization.
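As background for readers unfamiliar with the adjoint-based FSO technique, a commonly quoted approximation of the observation impact (in the spirit of Langland and Baker) is reproduced below; it is shown for orientation only and is not claimed to be the exact CMA-BJ_FSO formulation.

```latex
% Hedged background form of the adjoint-based observation impact:
% d = y - H(x_b) is the innovation, K the gain matrix, and e(.) the
% forecast-error measure evaluated from the analysis and background forecasts.
\delta e \;\approx\; \tfrac{1}{2}\, d^{\mathsf T} \mathbf{K}^{\mathsf T}
\left[ \nabla_{x_0} e\!\left(x_a^{f}\right) + \nabla_{x_0} e\!\left(x_b^{f}\right) \right]
```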
Figures:
Figure 1: Flowchart of CMA-BJ_FSO system computation.
Figure 2: Domains and observations used in CMA-BJ system at 0000 UTC on 15 July 2021.
Figure 3: Averaged 6 h forecast error reduction due to the assimilation of (a) eastward and northward velocity observations and (b) profiler observations to the 6 h forecast using different profiler assimilation schemes in the North China region for 0000 UTC 1 July–1800 UTC 31 July 2021; unit: J kg⁻¹.
Figure 4: Time series of profiler observation impacts for a total of 124 cycling assimilation and forecasts during 0000 UTC 1 July–1800 UTC 31 July 2021; unit: J kg⁻¹.
Figure 5: Characterization of the time-averaged diurnal variation in the sensitivity of the 6 h forecast errors to profiler observations for 0000 UTC 1 July–1800 UTC 31 July 2021: (a) 0000 UTC, (b) 0600 UTC, (c) 1200 UTC, and (d) 1800 UTC. Unit: J kg⁻¹.
Figure 6: Time-averaged observation impact by different observational variables for 0000 UTC 1 July–1800 UTC 31 July 2021; unit: J kg⁻¹.
Figure 7: Time-averaged observation impact by multisource observing platforms for 0000 UTC 1 July–1800 UTC 31 July 2021; unit: J kg⁻¹.
Figure 8: Time-averaged observation impact by multisource observing platforms valid at 0000 UTC (blue), 0600 UTC (green), 1200 UTC (yellow), and 1800 UTC (red); unit: J kg⁻¹.
Figure 9: Horizontal distributions of profiler impact on 6 h forecast errors; unit: J kg⁻¹.
Figure 10: Impact of profilers on 6 h forecast error reduction at different vertical levels: (a) time averaged; (b) time averaged and normalized (i.e., normalized by the number of single wind observations at each vertical level). Unit: J kg⁻¹.
17 pages, 2700 KiB  
Article
Receiving Paths Improvement of Digital Phased Array Antennas Using Adaptive Dynamic Range
by Xuan Luong Nguyen, Thanh Thuy Dang Thi, Phung Bao Nguyen and Viet Hung Tran
Electronics 2024, 13(21), 4161; https://doi.org/10.3390/electronics13214161 - 23 Oct 2024
Viewed by 558
Abstract
In contemporary radar technology, the observation and detection of objects with low radar cross-sections remains a significant challenge. A multi-functional radar model employing a digital phased array antenna system offers notable advantages over traditional radar in addressing this issue. Nonetheless, to fully capitalize on these benefits, improving the structure of the receiving path in digital transceiver modules is crucial. A method for improving the digital receiving path model by implementing a matched filter approach is introduced. Given that the return signals from objects are often lower than the internal noise, the analog part of the digital transceiver modules must ensure that its dynamic range aligns with the level of this noise and the weak signal. The output signal level of the analog part must correspond to the allowable input range of the analog-to-digital converter. Improvements in the receiving path to achieve a fully matched model can reduce errors in the phase parameters and amplitudes of the useful signal at the output. The simulation results presented in this paper demonstrate a reduction in amplitude error by approximately 1 dB and a phase error exceeding 1.5 degrees for the desired signal at the output of each receiving path. Consequently, these improvements are expected to enhance the overall quality and efficiency of the spatial and temporal accumulation processes in the digital phased array antenna system. Furthermore, to maintain the matched filter model, we also propose incorporating an adaptive “pseudo-expansion” of the linear gain range. This involves adding a feedback stage with an automatic and adaptive bias voltage adjustment for the intermediate-frequency preamplifier in the analog part of the receiving path. Simulations to qualitatively verify the validity of this proposal are conducted using data from practical operational radar system models.
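The dynamic-range matching argument in the abstract can be made concrete with a standard textbook relation (not taken from the paper): an ideal N-bit converter offers roughly 6.02·N + 1.76 dB of signal-to-noise ratio, so the analog part's dynamic range and the ADC's usable input range should be sized together. The numbers below are illustrative.

```python
# Back-of-the-envelope helper using the ideal-ADC SNR relation (illustrative values).
import math

def ideal_adc_snr_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit ADC."""
    return 6.02 * bits + 1.76

def bits_for_dynamic_range(dr_db: float) -> int:
    """Smallest resolution whose ideal SNR covers the required dynamic range."""
    return math.ceil((dr_db - 1.76) / 6.02)

print(ideal_adc_snr_db(14))           # ~86 dB for a 14-bit converter
print(bits_for_dynamic_range(90.0))   # bits needed to cover a 90 dB dynamic range
```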
Figures:
Figure 1: The general structure of the digital receiving model. f0—carrier frequency; LNA—low-noise amplifier; LO—local oscillator; CLK—clock controller signal; ADC—analog-to-digital converter; SPI_ADC—serial peripheral interface of ADC; DSP—digital signal processor; SPI_DSP—serial peripheral interface of DSP; and D_out—digital signal at the output.
Figure 2: The fundamental structure of the receiving path in the DTM. QDM—quadrature demodulation; LNA—low-noise amplifier; IF Amp—intermediate-frequency (IF) amplifier.
Figure 3: Determining the dynamic range for the analog part of the receiving path (a), and the adaptive adjustment of the operating point position (b).
Figure 4: The model proposed for improving the structure of the analog part in the DTM. M&CU—microprocessor and control unit; OVGU—offset voltage generation unit; LPF—low-pass filter.
Figure 5: The general structure of a DTM system using quadrature modulation after the improvement in the receiving path. QDM—quadrature demodulation; LNA—low-noise amplifier; IF preamp—IF preamplifier.
Figure 6: The simulation procedure.
Figure 7: The additive mixture simulation results of signals modulated by phase-shift keying code with 1024 positions and internal noise: (a) τ_signal/τ_sub-pul = 1024, U_signal/σ_noise ~ 0.1; (b) τ_signal/τ_sub-pul = 1024, U_signal/σ_noise ~ 0.05.
Figure 8: The amplitude (a) and phase errors (b) of the return signals before and after the improvement in the receiving path in the DTM.
Figure 9: The simulation results of signal accumulation when M ~ 6 × 10³ before and after the improvement in the receiving path in the DTM: (a,b) U_signal/σ_noise ~ 0.1; (c,d) U_signal/σ_noise ~ 0.05.
21 pages, 2024 KiB  
Article
Robust Truncated Statistics Constant False Alarm Rate Detection of UAVs Based on Neural Networks
by Wei Dong and Weidong Zhang
Drones 2024, 8(10), 597; https://doi.org/10.3390/drones8100597 - 18 Oct 2024
Viewed by 395
Abstract
With the rapid popularity of unmanned aerial vehicles (UAVs), airspace safety is facing tougher challenges, especially for the identification of non-cooperative target UAVs. As a vital approach for non-cooperative target identification, radar signal processing has attracted continuous and extensive attention and research. The constant false alarm rate (CFAR) detector is widely used in most current radar systems. However, the detection performance will sharply deteriorate in complex and dynamical environments. In this paper, a novel truncated statistics- and neural network-based CFAR (TSNN-CFAR) algorithm is developed. Specifically, we adopt a right truncated Rayleigh distribution model combined with the characteristics of pattern recognition using a neural network. In the simulation environments of four different backgrounds, the proposed algorithm does not need guard cells and outperforms the traditional mean level (ML) and ordered statistics (OS) CFAR algorithms. Especially in high-density target and clutter edge environments, by utilizing 19 statistics obtained from the numerical calculation of two reference windows as the input characteristics, the TSNN-CFAR algorithm has the best adaptive decision ability, accurate background clutter modeling, stable false alarm regulation property, and superior detection performance.
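For context, the sketch below implements the classical cell-averaging (mean-level) CFAR that the paper uses as a baseline, not the proposed TSNN-CFAR; the window sizes, guard cells, and false-alarm rate are illustrative assumptions, and the threshold factor assumes exponentially distributed (square-law) clutter.

```python
# Classical CA-CFAR baseline sketch (illustrative parameters, exponential clutter assumed).
import numpy as np

def ca_cfar(power, num_ref=16, num_guard=2, pfa=1e-3):
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    alpha = num_ref * (pfa ** (-1.0 / num_ref) - 1.0)    # threshold factor for a fixed Pfa
    half = num_ref // 2 + num_guard
    for i in range(half, n - half):
        left = power[i - half : i - num_guard]            # leading reference cells
        right = power[i + num_guard + 1 : i + half + 1]   # lagging reference cells
        noise_level = np.concatenate([left, right]).mean()
        detections[i] = power[i] > alpha * noise_level
    return detections

echo = np.random.exponential(1.0, 512)   # synthetic square-law clutter
echo[200] += 30.0                        # inject a strong point target
print(np.flatnonzero(ca_cfar(echo)))
```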
Figures:
Figure 1: Schematic diagram of ML-CFAR.
Figure 2: A case with no guard cells: (a) Homogeneous environment. (b) High-density target environment. (c,d) Low-power area of clutter edge. (e,f) High-power area of clutter edge.
Figure 3: Schematic diagram of TSNN-CFAR.
Figure 4: NN structure of TSNN-CFAR.
Figure 5: Confusion matrices of (a) NN-CFAR and (b) TSNN-CFAR on datasets.
Figure 6: Single-target detection result in the homogeneous environment.
Figure 7: Category judgment result in the homogeneous environment.
Figure 8: Detection probability result in the homogeneous environment.
Figure 9: Corresponding local enlarged result for Figure 8.
Figure 10: Multi-target detection result in the high-density target environment.
Figure 11: Category judgment result in the high-density target environment.
Figure 12: Detection probability result with an interference target in the high-density target environment.
Figure 13: Corresponding local enlarged result for Figure 12.
Figure 14: Detection probability result with 2 interference targets located in both sides of reference windows in the high-density target environment.
Figure 15: Corresponding local enlarged result for Figure 14.
Figure 16: Detection result in the clutter edge situation.
Figure 17: Corresponding local enlarged result for Figure 16.
Figure 18: Category judgment result in the clutter edge situation.
Figure 19: False alarm probability in the clutter edge situation.
Figure 20: Score of the robustness for different CFAR algorithms.
34 pages, 8862 KiB  
Article
A Novel Detection Transformer Framework for Ship Detection in Synthetic Aperture Radar Imagery Using Advanced Feature Fusion and Polarimetric Techniques
by Mahmoud Ahmed, Naser El-Sheimy and Henry Leung
Remote Sens. 2024, 16(20), 3877; https://doi.org/10.3390/rs16203877 - 18 Oct 2024
Viewed by 793
Abstract
Ship detection in synthetic aperture radar (SAR) imagery faces significant challenges due to the limitations of traditional methods, such as convolutional neural network (CNN) and anchor-based matching approaches, which struggle with accurately detecting smaller targets as well as adapting to varying environmental conditions. These methods, relying on either intensity values or single-target characteristics, often fail to enhance the signal-to-clutter ratio (SCR) and are prone to false detections due to environmental factors. To address these issues, a novel framework is introduced that leverages the detection transformer (DETR) model along with advanced feature fusion techniques to enhance ship detection. This feature enhancement DETR (FEDETR) module manages clutter and improves feature extraction through preprocessing techniques such as filtering, denoising, and applying maximum and median pooling with various kernel sizes. Furthermore, it combines metrics like the line spread function (LSF), peak signal-to-noise ratio (PSNR), and F1 score to predict optimal pooling configurations and thus enhance edge sharpness, image fidelity, and detection accuracy. Complementing this, the weighted feature fusion (WFF) module integrates polarimetric SAR (PolSAR) methods such as Pauli decomposition, coherence matrix analysis, and feature volume and helix scattering (Fvh) components decomposition, along with FEDETR attention maps, to provide detailed radar scattering insights that enhance ship response characterization. Finally, by integrating wave polarization properties, the ability to distinguish and characterize targets is augmented, thereby improving SCR and facilitating the detection of weakly scattered targets in SAR imagery. Overall, this new framework significantly boosts DETR’s performance, offering a robust solution for maritime surveillance and security.
(This article belongs to the Special Issue Target Detection with Fully-Polarized Radar)
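Two of the ingredients named above, pooling-based preprocessing and the PSNR selection metric, can be illustrated with the short sketch below. The synthetic chip, kernel sizes, and the SciPy filters are assumptions for demonstration and do not reproduce the FEDETR module itself.

```python
# Illustrative pooling + PSNR sweep (assumed data and parameters, not the FEDETR code).
import numpy as np
from scipy.ndimage import maximum_filter, median_filter

def psnr(reference, test):
    """Peak signal-to-noise ratio between two image chips."""
    peak = float(reference.max())
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

chip = np.abs(np.random.randn(128, 128))   # stand-in for a SAR intensity chip
for k in (3, 5, 7, 9):                     # candidate kernel sizes
    print(k,
          round(psnr(chip, maximum_filter(chip, size=k)), 2),
          round(psnr(chip, median_filter(chip, size=k)), 2))
```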
Figures:
Graphical abstract.
Figure 1: Flowchart of the proposed ship detection in SAR imagery.
Figure 2: CNN preprocessing model.
Figure 3: DETR pipeline overview [52].
Figure 4: Performance of FEDETR for two images from the test datasets SSDD and SAR Ship, including Gaofen-3 (a1–a8) and Sentinel-1 images (b1–b8) with different polarizations and resolutions. The ground truths, detection results, the false detection and missed detection results are indicated with green, red, yellow, and blue boxes, respectively.
Figure 5: Experimental results for ship detection in SAR images across four distinct regions: Onshore1, Onshore2, Offshore1, and Offshore2. (a) are the ground truth images; (b–e) are the detection results for DETR using VV and VH (DETR_VV, DETR_VH) as well as FEDETR using VV and VH (FEDETR_VV, FEDETR_VH) polarizations, respectively. Ground truths, detection results, false detection results, and missed detection results are marked with green, red, yellow, and blue boxes.
Figure 6: Experimental results for ship detection in SAR images across four regions: Onshore1, Onshore2, Offshore1, and Offshore2. (a) are the ground truth images and (b,c) are the predicted results from FEDETR with optimal pooling and kernel size and the WFF method, respectively. Ground truths, detection results, false detections, and missed detections are marked with green, red, yellow, and blue boxes, respectively.
Figure 7: Correlation matrix analyzing the relationship between kernel size, LSF, and PSNR for max pooling (a) and median pooling (b) on SSD and SAR Ship datasets. Validation of FEDETR module effectiveness.
Figure 8: Depicts the LSF of images with different types of pooling and kernel sizes. Panels (a1–a4) depict LSF images after max pooling, while panels (a5–a8) show LSF images after median pooling with kernel sizes 3, 5, 7, and 9 respectively for Gaofen-3 HH images from the SAR Ship dataset. Panels (b1–b4) illustrate LSF images after max pooling and panels (b5–b8) show LSF images after median pooling for images from the SSD dataset.
Figure 9: Backscattering intensity in VV and VH polarizations and ship presence across four regions. (a1,a2) Backscattering intensity in VV and VH polarizations for Onshore1; (a3,a4) backscattering intensity for ships in Onshore1; (b1,b2) backscattering intensity in VV and VH polarizations for Onshore2; (b3,b4) backscattering intensity for ships in Onshore2; (c1,c2) backscattering intensity in VV and VH polarizations for Offshore1; (c3,c4) backscattering intensity for ships in Offshore1; (d1,d2) backscattering intensity in VV and VH polarizations for Offshore2; and (d3,d4) backscattering intensity for ships in Offshore2. In each subfigure, the x-axis represents pixel intensity, and the y-axis represents frequency.
Figure 10: LSF and PSNR Comparisons for Onshore and Offshore Areas (Onshore1 (a,b), Onshore2 (c,d), Offshore1 (e,f), Offshore2 (g,h)) Using VV and VH Polarization with Median and Max Pooling.
Figure 11: Visual comparison of max and median pooling with different kernel sizes on onshore and offshore SAR imagery for VV and VH polarizations: (a1,a2) Onshore1 VV (max kernel size 3; median kernel size 3); (a3,a4) Onshore1 VV (median kernel size 5); (b1,b2) Onshore2 VV (max kernel size 3); (b3,b4) Onshore2 VH (median kernel size 5); (c1,c2) Offshore1 VV (max kernel size 7; median kernel size 7); (c3,c4) Offshore1 VH (max kernel size 3; median kernel size 3); (d1,d2) Offshore2 VV (max kernel size 5; median kernel size 5); (d3,d4) Offshore2 VH (max kernel size 5; median kernel size 5).
Figure 12: Experimental results for ship detection in SAR images across four regions: (a) Onshore1, (b) Onshore2, (c) Offshore1, and (d) Offshore2. The figure illustrates the effectiveness of the Pauli decomposition method in reducing noise and distinguishing ships from the background. Ships are marked in pink, while noise clutter is shown in green.
Figure 13: Signal-to-clutter ratio (SCR) comparisons for different polarizations across various scenarios. VV polarization is in blue, VH polarization in orange, and Fvh in green.
Figure 14: Otsu’s thresholding on four regions for Pauli and Fvh images: (a1–a4) thresholding for Onshore1, Onshore2, Offshore1, and Offshore2 for Pauli images; (b1–b4) thresholding for the same regions for Fvh images.
Figure 15: Visualization of FEDETR attention maps, Pauli decomposition, Fvh feature maps, and WFF results for Onshore1 (a1–a4), Onshore2 (b1–b4), Offshore1 (c1–c4), and Offshore2 (d1–d4).
14 pages, 26814 KiB  
Article
A Radar-Based Methodology for Aerosol Plume Identification and Characterisation on the South African Highveld
by Gerhardt Botha, Roelof Petrus Burger and Henno Havenga
Atmosphere 2024, 15(10), 1201; https://doi.org/10.3390/atmos15101201 - 8 Oct 2024
Viewed by 477
Abstract
Biomass burning on the South African Highveld annually injects substantial amounts of aerosols and trace gases into the atmosphere, impacting the global radiative balance, cloud microphysics, and regional air quality. These aerosols are transported as plumes over long distances, posing challenges to existing in situ and satellite-based monitoring techniques because of their limited spatial and temporal resolution, particularly in environments with low-level sources. This study aims to develop and validate a novel radar-based methodology to detect, track, and characterise aerosol plumes, addressing the limitations of existing in situ and satellite monitoring techniques. Using high-resolution volumetric reflectivity data from an S-band radar in Pretoria, South Africa, a traditional storm tracking algorithm is adapted to improve plume identification. Case studies of plume events in June and August 2013 demonstrate the radar’s effectiveness in distinguishing lower vertical profiles and reduced reflectivity of plumes compared with storm echoes. The adapted algorithm successfully tracked the spatial and temporal evolution of the plumes, revealing their short-lived nature. Results indicate that radar-derived geospatial characteristics have the potential to contribute significantly to understanding the impacts of plumes on local air quality. These findings underscore the critical need for high spatio-temporal resolution data to support effective air quality management and inform policy development in regions affected by biomass burning.
(This article belongs to the Special Issue Applications of Meteorological Radars in the Atmosphere)
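A minimal sketch of the identification step, in the spirit of adapting a storm-tracking algorithm to weaker echoes, is given below; the reflectivity thresholds, grid, and minimum object size are illustrative assumptions rather than the study's tuned values.

```python
# Illustrative plume-candidate labelling on a CAPPI reflectivity grid (assumed thresholds).
import numpy as np
from scipy import ndimage

def candidate_plumes(cappi_dbz, low=5.0, high=35.0, min_cells=20):
    """Label contiguous echoes between `low` and `high` dBZ as candidate plume objects."""
    mask = (cappi_dbz >= low) & (cappi_dbz <= high)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_cells]
    return labels, keep

field = np.random.uniform(-10, 25, (200, 200))   # synthetic reflectivity field (dBZ)
labels, plume_ids = candidate_plumes(field)
print(len(plume_ids), "candidate objects")
```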
Figures:
Figure 1: Flowchart of the plume identification and tracking methodology employed in this study.
Figure 2: Study domain indicating the minimum CAPPI level required to avoid radar beam blockage from topography at different elevations.
Figure 3: Radar beam propagation for beams at different elevation angles, illustrating the beam height as a function of range accounting for atmospheric refraction and the curvature of the Earth.
Figure 4: Radar reflectivity field for a plume event at a 2 km CAPPI level at 11:35 UTC on 19 June 2013 (a) and convective storm event at a 3 km CAPPI level on 28 November 2013 (b) with the vertical distribution of dBZ at a cross section (right).
Figure 5: MODIS Aqua true-colour-corrected surface reflectance and VIIRS thermal anomalies for plume events on the Highveld on 19 June 2013. The thermal anomalies are coloured according to acquisition time, with the corresponding time indicated in the legend.
Figure 6: MODIS Aqua true-colour-corrected surface reflectance and VIIRS thermal anomalies for widespread plume events on the Highveld on 29 August 2014, which were outside the maximum detectable range of the radar. The thermal anomalies are coloured according to acquisition time, with the corresponding time indicated in the legend.
Figure 7: Spatial time series of plume growth with maximum reflectivity for a plume event on 1 June 2013.
Figure 8: Analysis of a plume event on 26 August 2016 in Mpumalanga province, Highveld. The left panel displays a time-series graph of PM10 and PM2.5 concentrations (in µg/m³) at two monitoring stations, Phola and eMalahleni. The right panel shows a map of the plume’s trajectory and origin. The combined data illustrate the impact of the plume on ambient particulate matter concentrations in the region.
19 pages, 7060 KiB  
Article
A Comparison between Radar Variables and Hail Pads for a Twenty-Year Period
by Tomeu Rigo and Carme Farnell
Climate 2024, 12(10), 158; https://doi.org/10.3390/cli12100158 - 4 Oct 2024
Viewed by 857
Abstract
The time and spatial variability of hail events limit the capability of diagnosing the occurrence and stones’ size in thunderstorms using weather radars. The bibliography presents multiple variables and methods with different pros and cons. The studied area, the Lleida Plain, is annually hit by different hailstorms, which have a high impact on the agricultural sector. A rectangular distributed hail pad network in this plain has worked operationally since 2000 to provide information regarding different aspects of hail impact. Since 2002, the Servei Meteorològic de Catalunya (SMC) has operated a single-pol C-band weather radar network that volumetrically covers the region of interest. During these years, the SMC staff has been working on improving the capability of detecting hail, adapting some parameters and searching for thresholds that help to identify the occurrence and size of the stones in thunderstorms. The current research analyzes a twenty-year period (2004–2023) to provide a good picture of the hailstorms occurring in the region of interest. The main research result is that VIL (Vertically Integrated Liquid) density is a better indicator for hailstone size than VIL, which presents more uncertainty in discriminating different hail categories.
(This article belongs to the Special Issue Applications of Smart Technologies in Climate Risk and Adaptation)
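Since the comparison hinges on VIL and VIL density, a hedged sketch of the standard formulation is given below (the usual Greene–Clark style integration and an echo-top normalisation); it is meant as background rather than the SMC operational code, and the profile, heights, and echo-top threshold are illustrative assumptions.

```python
# Standard-style VIL and VIL density from a reflectivity column (illustrative inputs).
import numpy as np

def vil_and_density(dbz_profile, heights_m, echo_top_threshold_dbz=18.0):
    z = 10.0 ** (np.asarray(dbz_profile, dtype=float) / 10.0)    # dBZ -> Z (mm^6 m^-3)
    vil = 0.0
    for i in range(len(z) - 1):
        layer = heights_m[i + 1] - heights_m[i]
        vil += 3.44e-6 * ((z[i] + z[i + 1]) / 2.0) ** (4.0 / 7.0) * layer   # kg m^-2
    above = np.flatnonzero(np.asarray(dbz_profile) >= echo_top_threshold_dbz)
    echo_top_m = heights_m[above[-1]] if above.size else float("nan")
    vil_density = 1000.0 * vil / echo_top_m                       # g m^-3
    return vil, vil_density

column = [35, 45, 50, 42, 30, 15]                # reflectivity column (dBZ)
levels = [1000, 2000, 3000, 4000, 5000, 6000]    # CAPPI heights (m)
print(vil_and_density(column, levels))
```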
Figures:
Figure 1: Top left: Map of Western Europe. The area included in the rectangle is the region of study. Bottom right: Zoom in on the region of interest. The dots indicate the location of the radars and the circles indicate the 50 (dots) and 100 (straight) km range for each radar. The red shaded area marks the region covered by the hail pad network. “LMI”, “CDV”, “PBE”, and “PDA” indicate the locations of the radars of La Miranda, Creu del Vent, Puig Bernat, and Puig d’Arques, respectively.
Figure 2: Example of hit hail pad corresponding to the event of 29 August 2023. The image has been filtered to highlight the impacts over the plaque. The striped rectangle in the middle of the pad corresponds to the calibration area (see [19] for more information).
Figure 3: (A) CAPPI at 3 km height at 17.12 UTC on 28 July 2028. The black arrow line shows the cross section segment shown in panel (B). (B) Cross section of the thunderstorm over the region of interest at the same time as panel (A). (C) Maximum VIL field for the whole day of 28 July 2028. The dots indicate the maximum hail size registered by the different hail pads. (D) Same as panel (C), but for the maximum VIL density field.
Figure 4: Top: each dot corresponds to an analyzed hail pad during the event of 5 July 2012. Below: normalized coordinates of the same points centered in (0, 0).
Figure 5: Top: Hail size distribution (in logarithm) for the whole dataset of hail pad registers. Bottom: Linear fitting of the distribution.
Figure 6: From top to bottom: distribution of graupel (grey), hail (blue), and severe hail (red) for the week of the year (A), the month (B), the year (C), and the maximum daily surface temperature (D).
Figure 7: As in Figure 6, but for the percentage distribution (only for cases with plaques with impacts). Panels (A–C) correspond to weekly of the year, monthly, and yearly distributions, respectively.
Figure 8: Box plots of the VIL for the four hail categories detected in the hail pads (from left to right: no hail, graupel, hail and severe hail).
Figure 9: Same as Figure 8, but for VIL density.
Figure 10: VIL (top) and VIL density (bottom) distributions for all the four categories (black: no hail; grey: graupel; blue: hail; and red: severe hail).
Figure 11: Total sample of hail records (dots) for 2000–2023 (dark grey for null, grey for graupel, cyan for hail, and orange for severe hail). The dashed lines correspond to the 10th percentile of occurrence for each category (black for null, green for graupel, blue for hail, and red for severe hail), showing the usual behavior of the hailfall in the region concerning the center of the event.
Figure 12: Graupel (A), hail (B), and severe hail (C) spatial distributions estimated using maximum daily VIL fields for 2013–2023. Dotted, dashed, and straight lines indicate the 10th, 50th, and 90th ground observations percentiles.
Figure 13: Same as Figure 12, but for VIL density (graupel, hail, and severe hail in panels (A–C), respectively).
27 pages, 5540 KiB  
Article
Marine Radar Constant False Alarm Rate Detection in Generalized Extreme Value Distribution Based on Space-Time Adaptive Filtering Clutter Statistical Analysis
by Baotian Wen, Zhizhong Lu and Bowen Zhou
Remote Sens. 2024, 16(19), 3691; https://doi.org/10.3390/rs16193691 - 3 Oct 2024
Viewed by 544
Abstract
The performance of marine radar constant false alarm rate (CFAR) detection method is significantly influenced by the modeling of sea clutter distribution and detector decision rules. The false alarm rate and detection rate are therefore unstable. In order to address low CFAR detection performance and the modeling problem of non-uniform, non-Gaussian, and non-stationary sea clutter distribution in marine radar images, in this paper, a CFAR detection method in generalized extreme value distribution modeling based on marine radar space-time filtering background clutter is proposed. Initially, a three-dimensional (3D) frequency wave-number (space-time) domain adaptive filter is employed to filter the original radar image, so as to obtain uniform and stable background clutter. Subsequently, generalized extreme value (GEV) distribution is introduced to integrally model the filtered background clutter. Finally, Inclusion/Exclusion (IE) with the best performance under the GEV distribution is selected as the clutter range profile CFAR (CRP-CFAR) detector decision rule in the final detection. The proposed method is verified by utilizing real marine radar image data. The results indicate that when the Pfa is set at 0.0001, the proposed method exhibits an average improvement in PD of 2.3% compared to STAF-RCBD-CFAR, and a 6.2% improvement compared to STCS-WL-CFAR. When the Pfa is set at 0.001, the proposed method exhibits an average improvement in PD of 6.9% compared to STAF-RCBD-CFAR, and a 9.6% improvement compared to STCS-WL-CFAR.
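The thresholding step can be illustrated with a short sketch: fit a generalized extreme value distribution to (already filtered) background clutter and read the detection threshold off its inverse survival function at the desired false-alarm probability. The synthetic clutter and the Pfa values are assumptions; the full STAF-GEV-IE-CFAR chain is not reproduced here.

```python
# GEV fit + CFAR threshold sketch (synthetic clutter stands in for filtered radar data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
clutter = rng.gumbel(loc=1.0, scale=0.3, size=10_000)    # Gumbel is the zero-shape GEV case

shape, loc, scale = stats.genextreme.fit(clutter)        # fit the background clutter model
for pfa in (1e-3, 1e-4):
    threshold = stats.genextreme.isf(pfa, shape, loc=loc, scale=scale)
    print(f"Pfa={pfa:g}  threshold={threshold:.3f}")
```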
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Interpolated original radar range-azimuth map: (<b>a</b>) The first set data (wave height 2.65 m). (<b>b</b>) The second set data (wave height 2.83 m). (<b>c</b>) The third set data (wave height 3.18 m).</p>
Full article ">Figure 2
<p>The background clutter statistical distribution in the original radar range-azimuth map: (<b>a</b>) PDF distribution for the whole area, near area, and far area under the three datasets. (<b>b</b>) CDF distribution for the whole area, near area, and far area under the three datasets.</p>
Full article ">Figure 3
<p>The processed radar range-azimuth map. (<b>a</b>) The first set data (wave height 2.65 m). (<b>b</b>) The second set data (wave height 2.83 m). (<b>c</b>) The third set data (wave height 3.18 m).</p>
Full article ">Figure 4
<p>The statistical background clutter distribution in the processed range-azimuth map: (<b>a</b>) PDF distribution for the whole area, near area, and far area under the three datasets. (<b>b</b>) CDF distribution for the whole area, near area, and far area under the three datasets.</p>
Full article ">Figure 5
<p>PDF and CDF plots: the processed data, estimated Weibull, K, Log-normal, KK, WW, Generalized Pareto, and Generalized Extreme Value distribution. (<b>a</b>,<b>b</b>) The first set data (wave height 2.65 m). (<b>c</b>,<b>d</b>) The second set data (wave height 2.83 m). (<b>e</b>,<b>f</b>) The third set data (wave height 3.18 m).</p>
Full article ">Figure 6
<p>The errors of PDF and CDF: estimated Weibull, K, Log-normal, KK, WW, Generalized Pareto, and Generalized Extreme Value. (<b>a</b>,<b>b</b>) The first set data (wave height 2.65 m). (<b>c</b>,<b>d</b>) The second set data (wave height 2.83 m). (<b>e</b>,<b>f</b>) The third set data (wave height 3.18 m).</p>
Full article ">Figure 6 Cont.
<p>The errors of PDF and CDF: estimated Weibull, K, Log-normal, KK, WW, Generalized Pareto, and Generalized Extreme Value. (<b>a</b>,<b>b</b>) The first set data (wave height 2.65 m). (<b>c</b>,<b>d</b>) The second set data (wave height 2.83 m). (<b>e</b>,<b>f</b>) The third set data (wave height 3.18 m).</p>
Full article ">Figure 7
<p>Schematic diagram of CRP-CFAR detection principle.</p>
Full article ">Figure 8
<p>The extracted real moving target data: (<b>a</b>) The target in the 1st image of the sequence. (<b>b</b>) The target in the 15th image of the sequence. (<b>c</b>) The target in the 32nd image of the sequence.</p>
Full article ">Figure 9
<p>The detection results of seven detectors in first datasets: (<b>a</b>) Truth image. (<b>b</b>) OS-CFAR. (<b>c</b>) TMOS-CFAR. (<b>d</b>) GMOS-CFAR. (<b>e</b>) WH-CFAR. (<b>f</b>) WHOS-CFAR. (<b>g</b>) IE-CFAR. (<b>h</b>) LOGT-CFAR.</p>
Full article ">Figure 10
<p>The detection results of seven detectors in the second datasets: (<b>a</b>) Truth image. (<b>b</b>) OS-CFAR. (<b>c</b>) TMOS-CFAR. (<b>d</b>) GMOS-CFAR. (<b>e</b>) WH-CFAR. (<b>f</b>) WHOS-CFAR. (<b>g</b>) IE-CFAR. (<b>h</b>) LOGT-CFAR.</p>
Full article ">Figure 11
<p>The detection results of seven detectors in the third datasets: (<b>a</b>) Truth image. (<b>b</b>) OS-CFAR. (<b>c</b>) TMOS-CFAR. (<b>d</b>) GMOS-CFAR. (<b>e</b>) WH-CFAR. (<b>f</b>) WHOS-CFAR. (<b>g</b>) IE-CFAR. (<b>h</b>) LOGT-CFAR.</p>
Full article ">Figure 12
<p>The relation curve between PD and SCR of the seven detectors under the generalized extreme value distribution, <math display="inline"><semantics> <mrow> <mo form="prefix">Pfa</mo> <mo>=</mo> <mn>0.0001</mn> </mrow> </semantics></math>. (<b>a</b>) The first set data. (<b>b</b>) The second set data. (<b>c</b>) The third set data. (<b>d</b>) 100 datasets average.</p>
Full article ">Figure 13
<p>Structure flow diagram of STAF-GEV-IE-CFAR.</p>
Full article ">Figure 14
<p>The detection results of five methods at <math display="inline"><semantics> <mrow> <mi>SCR</mi> <mo>=</mo> <mn>4</mn> <mspace width="3.33333pt"/> <mi>dB</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo form="prefix">Pfa</mo> <mo>=</mo> <mn>0.0001</mn> </mrow> </semantics></math>: (<b>a</b>) STAF-GEV-IE-CFAR. (<b>b</b>) STCS-WL-CFAR. (<b>c</b>) EMD-CFAR. (<b>d</b>) STAF-RCBD-CFAR. (<b>e</b>) IE-CFAR. (<b>f</b>) KGLRTD.</p>
Full article ">Figure 15
<p>The detection results of five methods at <math display="inline"><semantics> <mrow> <mi>SCR</mi> <mo>=</mo> <mn>0</mn> <mspace width="3.33333pt"/> <mi>dB</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo form="prefix">Pfa</mo> <mo>=</mo> <mn>0.0001</mn> </mrow> </semantics></math>: (<b>a</b>) STAF-GEV-IE-CFAR. (<b>b</b>) STCS-WL-CFAR. (<b>c</b>) EMD-CFAR. (<b>d</b>) STAF-RCBD-CFAR. (<b>e</b>) IE-CFAR. (<b>f</b>) KGLRTD.</p>
Full article ">Figure 16
<p>The detection results of five methods at <math display="inline"><semantics> <mrow> <mi>SCR</mi> <mo>=</mo> <mn>2</mn> <mspace width="3.33333pt"/> <mi>dB</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo form="prefix">Pfa</mo> <mo>=</mo> <mn>0.0001</mn> </mrow> </semantics></math>: (<b>a</b>) STAF-GEV-IE-CFAR. (<b>b</b>) STCS-WL-CFAR. (<b>c</b>) EMD-CFAR. (<b>d</b>) STAF-RCBD-CFAR. (<b>e</b>) IE-CFAR. (<b>f</b>) KGLRTD.</p>
Full article ">Figure 17
<p>The detection results of six methods at SCR = 6 dB and Pfa = 0.0001: (<b>a</b>) STAF-GEV-IE-CFAR. (<b>b</b>) STCS-WL-CFAR. (<b>c</b>) EMD-CFAR. (<b>d</b>) STAF-RCBD-CFAR. (<b>e</b>) IE-CFAR. (<b>f</b>) KGLRTD.</p>
Full article ">Figure 18
<p>Comparison of ROC curves of different methods: (<b>a</b>) SCR = 2 dB. (<b>b</b>) SCR = 4 dB. (<b>c</b>) SCR = 6 dB.</p>
Full article ">Figure 19
<p>Comparison of detection performances of different methods: (<b>a</b>) Pfa = 0.0001. (<b>b</b>) Pfa = 0.001.</p>
Full article ">
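The captions above describe CFAR detectors whose thresholds are derived from a generalized extreme value (GEV) model of the sea-clutter amplitude at a prescribed false-alarm rate Pfa. As a rough illustration of that single step only (not the STAF-GEV-IE-CFAR pipeline itself), the Python sketch below fits a GEV distribution to reference clutter cells with SciPy and inverts its survival function to obtain the detection threshold; the function name, synthetic data, and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genextreme

def gev_cfar_threshold(clutter_samples, pfa=1e-4):
    """Fit a GEV distribution to reference clutter cells and return the
    amplitude threshold whose exceedance probability equals pfa."""
    c, loc, scale = genextreme.fit(clutter_samples)       # shape, location, scale
    return genextreme.isf(pfa, c, loc=loc, scale=scale)   # T such that P(X > T) = pfa

# Illustrative use with synthetic clutter amplitudes (not measured sea clutter).
rng = np.random.default_rng(0)
clutter = rng.gumbel(loc=1.0, scale=0.3, size=5000)
threshold = gev_cfar_threshold(clutter, pfa=1e-4)
cell_under_test = 3.2
print("detection" if cell_under_test > threshold else "no detection", round(threshold, 3))
```

A full CFAR detector would re-estimate the threshold from the reference window around each cell under test; the sketch only shows how a fixed-Pfa threshold follows from the fitted clutter model.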
22 pages, 21022 KiB  
Article
Ego-Vehicle Speed Correction for Automotive Radar Systems Using Convolutional Neural Networks
by Sunghoon Moon, Daehyun Kim and Younglok Kim
Sensors 2024, 24(19), 6409; https://doi.org/10.3390/s24196409 - 3 Oct 2024
Viewed by 2838
Abstract
The development of autonomous driving vehicles has increased the global demand for robust and efficient automotive radar systems. This study proposes an automotive radar-based ego-vehicle speed detection network (AVSD Net) model using convolutional neural networks for estimating the speed of the ego vehicle. The preprocessing and postprocessing methods used for vehicle speed correction are presented in detail. The AVSD Net model exhibits characteristics that are independent of the angular performance of the radar system and its mounting angle on the vehicle, thereby reducing the loss of the maximum detection range without requiring a downward or wide beam for the elevation angle. The ego-vehicle speed is effectively estimated when the range–velocity spectrum data are input into the model. Moreover, preprocessing and postprocessing facilitate an accurate correction of the ego-vehicle speed while reducing the complexity of the model, enabling its application to embedded systems. The proposed ego-vehicle speed correction method can improve safety in various applications, such as autonomous emergency braking systems, forward collision avoidance assist, adaptive cruise control, rear cross-traffic alert, and blind spot detection systems. Full article
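To make the abstract's core idea concrete (regressing the ego-vehicle speed directly from a range–velocity spectrum with a small CNN), here is a minimal PyTorch sketch. It is a stand-in under stated assumptions: the class name, layer sizes, and input resolution are placeholders and do not reproduce the published AVSD Net variants or their pre- and postprocessing.

```python
import torch
import torch.nn as nn

class TinySpeedNet(nn.Module):
    """Illustrative CNN that regresses ego-vehicle speed from a
    range-velocity (RV) spectrum; dimensions are placeholders."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)  # single regression output: estimated speed

    def forward(self, rv_map):
        x = self.features(rv_map)       # (B, 16, 1, 1)
        return self.head(x.flatten(1))  # (B, 1)

# One RV map of 64 range bins x 128 velocity bins, batch size 1.
rv = torch.randn(1, 1, 64, 128)
speed_estimate = TinySpeedNet()(rv)
```

In practice the per-frame estimate would then be smoothed over a sliding window, as the postprocessing figures below suggest.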
(This article belongs to the Section Radar Sensors)
Show Figures

Figure 1

Figure 1
<p>Configuration of the vehicle and radar system: (<b>a</b>) photograph; (<b>b</b>) schematic.</p>
Full article ">Figure 2
<p>Waveform of the automotive radar system.</p>
Full article ">Figure 3
<p>Measurement results at an ego-vehicle speed of 17.5 m/s. The dashed and solid red lines represent the left and right guardrails, respectively: (<b>a</b>) Video acquired from the webcam; (<b>b</b>) range-velocity (RV) spectrum data acquired from the automotive radar.</p>
Full article ">Figure 4
<p>Variation in the radial velocity based on the position of objects: (<b>a</b>) automotive radar coordinate system; (<b>b</b>) radial velocity according to position x of the stationary objects.</p>
Full article ">Figure 5
<p>Ego-vehicle speed obtained from the global positioning system (GPS) and controller area network (CAN).</p>
Full article ">Figure 6
<p>Position of the target and antenna.</p>
Full article ">Figure 7
<p>Schematic of the simulation method.</p>
Full article ">Figure 8
<p>Comparison of results: (<b>a</b>) measurement data; (<b>b</b>) simulation data.</p>
Full article ">Figure 9
<p>Preprocessing flowchart: (<b>a</b>) measurement data; (<b>b</b>) simulation data.</p>
Full article ">Figure 10
<p>Automotive radar-based ego-vehicle speed detection (AVSD) block.</p>
Full article ">Figure 11
<p>Architecture of the automotive radar-based ego-vehicle speed detection network (AVSD Net)-84k.</p>
Full article ">Figure 12
<p>Architecture of AVSD Net-717.</p>
Full article ">Figure 13
<p>Postprocessing flowchart.</p>
Full article ">Figure 14
<p>Method followed for ego-vehicle speed correction.</p>
Full article ">Figure 15
<p>Results obtained from the AVSD Net model: (<b>a</b>) training error rate; (<b>b</b>) testing error rate.</p>
Full article ">Figure 16
<p>Speed ratio results after postprocessing: (<b>a</b>) AVSD Net-84k; (<b>b</b>) AVSD Net-38k; (<b>c</b>) AVSD Net-12k; (<b>d</b>) AVSD Net-3k; (<b>e</b>) AVSD Net-1k; (<b>f</b>) AVSD Net-717.</p>
Full article ">Figure 17
<p>Speed ratio results after postprocessing of AVSD Net-717 for various window sizes: (<b>a</b>) 150; (<b>b</b>) 100; (<b>c</b>) 50; (<b>d</b>) 25.</p>
Full article ">Figure 18
<p>Rate of abnormal values with respect to window size.</p>
Full article ">
21 pages, 13186 KiB  
Article
Ship Contour Extraction from Polarimetric SAR Images Based on Polarization Modulation
by Guoqing Wu, Shengbin Luo Wang, Yibin Liu, Ping Wang and Yongzhen Li
Remote Sens. 2024, 16(19), 3669; https://doi.org/10.3390/rs16193669 - 1 Oct 2024
Viewed by 718
Abstract
Ship contour extraction is vital for extracting the geometric features of ships, providing comprehensive information essential for ship recognition. The main factors affecting the contour extraction performance are speckle noise and amplitude inhomogeneity, which can lead to over-segmentation and missed detection of ship edges. Polarimetric synthetic aperture radar (PolSAR) images contain rich target scattering information. Under different transmit and receive polarizations, the amplitude and phase of a pixel can differ, which provides the potential to meet the uniformity requirement. This paper proposes a novel ship contour extraction framework from PolSAR images based on polarization modulation. Firstly, the image is partitioned into the foreground and background using a super-pixel unsupervised clustering approach. Subsequently, an optimization criterion for target amplitude modulation to achieve uniformity is designed. Finally, the ship’s contour is extracted from the optimized image using an edge-detection operator and an adaptive edge extraction algorithm. Based on the contour, the geometric features of ships are extracted. Moreover, a PolSAR ship contour extraction dataset is established using Gaofen-3 PolSAR images, combined with expert knowledge and automatic identification system (AIS) data. With this dataset, we compare the accuracy of contour extraction and geometric features with state-of-the-art methods. The average errors of extracted length and width are reduced to 20.09 m and 8.96 m. The results demonstrate that the proposed method performs well in both accuracy and precision. Full article
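The final step described above, deriving ship length, width, and orientation from the extracted contour, is commonly done by ellipse fitting. The sketch below illustrates that generic step with OpenCV on a binary ship mask; it assumes the OpenCV 4.x API, works in pixel units (multiply by the ground sample distance for metres), and stands in for, rather than reproduces, the paper's polarization modulation and adaptive contour extraction.

```python
import cv2
import numpy as np

def ship_geometry_from_mask(mask):
    """Fit an ellipse to the largest contour in a binary ship mask and return
    (length, width, orientation_deg) in pixel units."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),        # OpenCV 4.x API
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    ship = max(contours, key=cv2.contourArea)                    # largest connected blob
    (_cx, _cy), axes, angle = cv2.fitEllipse(ship)               # needs >= 5 contour points
    return max(axes), min(axes), angle                           # major axis, minor axis, angle

# Toy example: a filled rotated rectangle stands in for a detected ship mask.
mask = np.zeros((200, 200), np.uint8)
box = cv2.boxPoints(((100.0, 100.0), (80.0, 20.0), 30.0)).astype(np.int32)
cv2.fillPoly(mask, [box], 1)
print(ship_geometry_from_mask(mask))
```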
(This article belongs to the Special Issue SAR Images Processing and Analysis (2nd Edition))
Show Figures

Figure 1

Figure 1
<p>The ship segmentation results on the SSDD dataset: (<b>a</b>) Ground truth of PSeg No. 421. (<b>b</b>) Segmentation result. (<b>c</b>) Ground truth of PSeg No. 328. (<b>d</b>) Segmentation result.</p>
Full article ">Figure 2
<p>The procedure of amplitude approximation method.</p>
Full article ">Figure 3
<p>The Gaofen-3 ship chips and polarization modulation results for a single target: (<b>a</b>) 2D image of HH. (<b>b</b>) 3D image of HH. (<b>c</b>) 2D image of optimization on the target area. (<b>d</b>) 3D image of optimization on the target area. (<b>e</b>) 2D image of joint optimization. (<b>f</b>) 3D image of joint optimization. (<b>g</b>) 2D image of amplitude approximation. (<b>h</b>) 3D image of amplitude approximation.</p>
Full article ">Figure 3 Cont.
<p>The Gaofen-3 ship chips and polarization modulation results for a single target: (<b>a</b>) 2D image of HH. (<b>b</b>) 3D image of HH. (<b>c</b>) 2D image of optimization on the target area. (<b>d</b>) 3D image of optimization on the target area. (<b>e</b>) 2D image of joint optimization. (<b>f</b>) 3D image of joint optimization. (<b>g</b>) 2D image of amplitude approximation. (<b>h</b>) 3D image of amplitude approximation.</p>
Full article ">Figure 4
<p>The Gaofen-3 ship chips and polarization modulation results for multiple targets: (<b>a</b>) 2D image of HH. (<b>b</b>) 3D image of HH. (<b>c</b>) 2D image of optimization on the target area. (<b>d</b>) 3D image of optimization on the target area. (<b>e</b>) 2D image of joint optimization. (<b>f</b>) 3D image of joint optimization. (<b>g</b>) 2D image of amplitude approximation. (<b>h</b>) 3D image of amplitude approximation.</p>
Full article ">Figure 5
<p>The procedure of contour extraction algorithm of PolSAR images.</p>
Full article ">Figure 6
<p>Superpixel segmentation results and superpixel-based foreground–background classification results: (<b>a</b>) Superpixel segmentation results; (<b>b</b>) Superpixel-based foreground–background classification results; (<b>c</b>) Amplitude distribution of foreground and background superpixels.</p>
Full article ">Figure 7
<p>Schematic diagram of dual threshold polarization modulation.</p>
Full article ">Figure 8
<p>Edge strength extracted by ROEWA operator before and after image enhancement: (<b>a</b>) Edge strength of original image (HH); (<b>b</b>) Edge strength of optimized image.</p>
Full article ">Figure 9
<p>Flowchart of adaptive contour extraction method.</p>
Full article ">Figure 10
<p>Contour extraction method with adaptive clustering: (<b>a</b>) Edge strength of original image; (<b>b</b>) NMS result; (<b>c</b>) Clustering result (k = 2); (<b>d</b>) Clustering result (k = 3); (<b>e</b>) Strong edge; (<b>f</b>) Final contour.</p>
Full article ">Figure 11
<p>The result of ellipse fitting and schematic of ellipse parameters: (<b>a</b>) The ellipse fitting result; (<b>b</b>) The parameters of ellipse.</p>
Full article ">Figure 12
<p>The optical images of the selected dataset and the PolSAR images with labels: (<b>a</b>) The optical image of data No. 1; (<b>b</b>) The optical image of data No. 2; (<b>c</b>) The optical image of data No. 3; (<b>d</b>) The labeled image of data No. 1; (<b>e</b>) The labeled image of data No. 2; (<b>f</b>) The labeled image of data No. 3.</p>
Full article ">Figure 13
<p>Contour extraction results of single-target PolSAR images: (<b>a</b>–<b>d</b>) are the intensity of HH, HV, VV, and polarization modulation image, respectively. (<b>e</b>–<b>h</b>) are the edge-strength map of HH, HV, VV, and polarization modulation, respectively. (<b>i</b>–<b>l</b>) are the contour results of HH, HV, VV, and polarization modulation, respectively.</p>
Full article ">Figure 14
<p>Contour extraction results of multi-target PolSAR images: (<b>a</b>–<b>d</b>) are the intensity of HH, HV, VV, and polarization modulation image, respectively. (<b>e</b>–<b>h</b>) are the edge-strength map of HH, HV, VV, and polarization modulation, respectively. (<b>i</b>–<b>l</b>) are the contour results of HH, HV, VV, and polarization modulation, respectively.</p>
Full article ">Figure 15
<p>Detection results at different IoU thresholds.</p>
Full article ">Figure 16
<p>The results of ship contour and ellipse fitting with different images: (<b>a</b>–<b>e</b>) are the fitting results of HH, HV, VV, SPAN, and polarization modulation images, respectively.</p>
Full article ">Figure 17
<p>Ship size extraction results: (<b>a</b>–<b>c</b>) are the extraction results of length, width, and orientation, respectively.</p>
Full article ">
20 pages, 3755 KiB  
Article
Multidirectional Attention Fusion Network for SAR Change Detection
by Lingling Li, Qiong Liu, Guojin Cao, Licheng Jiao, Fang Liu, Xu Liu and Puhua Chen
Remote Sens. 2024, 16(19), 3590; https://doi.org/10.3390/rs16193590 - 26 Sep 2024
Viewed by 624
Abstract
Synthetic Aperture Radar (SAR) imaging is essential for monitoring geomorphic changes, urban transformations, and natural disasters. However, the inherent complexities of SAR, particularly pronounced speckle noise, often lead to numerous false detections. To address these challenges, we propose the Multidirectional Attention Fusion Network (MDAF-Net), an advanced framework that significantly enhances image quality and detection accuracy. Firstly, we introduce the Multidirectional Filter (MF), which employs side-window filtering techniques and eight directional filters. This approach supports multidirectional image processing, effectively suppressing speckle noise and precisely preserving edge details. By utilizing deep neural network components, such as average pooling, the MF dynamically adapts to different noise patterns and textures, thereby enhancing image clarity and contrast. Building on this innovation, MDAF-Net integrates multidirectional feature learning with a multiscale self-attention mechanism. This design utilizes local edge information for robust noise suppression and combines global and local contextual data, enhancing the model’s contextual understanding and adaptability across various scenarios. Rigorous testing on six SAR datasets demonstrated that MDAF-Net achieves superior detection accuracy compared with other methods. On average, the Kappa coefficient improved by approximately 1.14%, substantially reducing errors and enhancing change detection precision. Full article
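The Multidirectional Filter described above builds on the side-window filtering idea: each pixel is averaged over several directional sub-windows and the output that stays closest to the original value is kept, so speckle is smoothed without smearing edges. The NumPy/SciPy sketch below illustrates only that classical principle with eight fixed mean windows; the learnable, pooling-based MF inside MDAF-Net is not reproduced, and the function name and window layout are illustrative.

```python
import numpy as np
from scipy.ndimage import correlate

def side_window_mean_filter(img, radius=2):
    """Toy multidirectional mean filter: average each pixel over eight
    directional sub-windows and keep the result closest to its original
    value, suppressing speckle while preserving edges."""
    img = np.asarray(img, dtype=float)
    r, size = radius, 2 * radius + 1
    rows = np.arange(size)[:, None]
    cols = np.arange(size)[None, :]
    masks = [
        cols <= r, cols >= r, rows <= r, rows >= r,             # left, right, up, down halves
        (rows <= r) & (cols <= r), (rows <= r) & (cols >= r),   # NW, NE quadrants
        (rows >= r) & (cols <= r), (rows >= r) & (cols >= r),   # SW, SE quadrants
    ]
    candidates = np.stack(
        [correlate(img, m / m.sum(), mode="reflect") for m in masks]
    )
    best = np.abs(candidates - img).argmin(axis=0)              # best window per pixel
    return np.take_along_axis(candidates, best[None], axis=0)[0]

# Illustrative use on a synthetic speckled patch (not real SAR data).
speckled = np.random.default_rng(1).gamma(1.0, 1.0, size=(64, 64))
smoothed = side_window_mean_filter(speckled, radius=2)
```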
(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)
Show Figures

Figure 1

Figure 1
<p>Multidirectional Attention Fusion Network Framework diagram.</p>
Full article ">Figure 2
<p>MDAF module structure diagram. This structure essentially combines convolution to denoise, enhance, and maintain learnability in the feature layer.</p>
Full article ">Figure 3
<p>Multidirectional filter window design.</p>
Full article ">Figure 4
<p>Feature layer visualization using various filtering methods. Each group (<b>a</b>–<b>f</b>) shows original features, mean filtering, median filtering, and multidirectional filtering results.</p>
Full article ">Figure 5
<p>MSSA structure diagram. The MSSA module integrates global and local information.</p>
Full article ">Figure 6
<p>This figure presents geographical datasets from various extracted regions and river systems. The datasets include Farmland-A, Farmland-B, and the Inland River originating from the Yellow River Estuary, as well as datasets from the Ottawa River, Red River, and Jialu River. Specifically, (<b>a</b>) illustrates the data corresponding to Time Phase 1, (<b>b</b>) displays the data for Time Phase 2, and (<b>c</b>) serves as a reference change map for comparison.</p>
Full article ">Figure 7
<p>Dual-temporal SAR images of Yellow River estuary dataset. (<b>a</b>) Taken on 18 June 2008, (<b>b</b>) Taken on 19 June 2009.</p>
Full article ">Figure 8
<p>Parameter variation indicators for different image patch sizes.</p>
Full article ">Figure 9
<p>Prediction results of MDAF-Net and comparison methods on Farmland-A, Farmland-B and Inland River datasets.</p>
Full article ">Figure 10
<p>Prediction results of MDAF-Net and comparison methods on Ottawa dataset.</p>
Full article ">Figure 11
<p>Prediction results of MDAF-Net and comparison methods on Red River Dataset.</p>
Full article ">Figure 12
<p>Prediction results of MDAF-Net and comparison methods on Jialu River dataset.</p>
Full article ">
21 pages, 4077 KiB  
Article
Analysis of Advanced Driver-Assistance Systems for Safe and Comfortable Driving of Motor Vehicles
by Tomasz Neumann
Sensors 2024, 24(19), 6223; https://doi.org/10.3390/s24196223 - 26 Sep 2024
Viewed by 1886
Abstract
This paper aims to thoroughly examine and compare advanced driver-assistance systems (ADASs) in the context of their impact on safety and driving comfort. It also seeks to determine the level of acceptance and trust drivers have in these systems. The first chapter of this document describes the sensors used in ADASs, including radars, cameras, LiDAR, and ultrasonic sensors. The subsequent chapter presents the most popular driver assistance systems, including adaptive cruise control (ACC), blind spot detection (BSD), lane keeping systems (LDW/LKS), intelligent headlamp control (IHC), and emergency brake assist (EBA). A key element of this work is the evaluation of the effectiveness of these systems in terms of safety and driving comfort, employing a survey conducted among drivers. Data analysis illustrates how these systems are perceived and identifies areas requiring improvement. Overall, the paper shows drivers’ positive reception of ADASs, with most respondents confirming that these technologies increase their sense of safety and driving comfort. These systems prove to be particularly helpful in avoiding accidents and hazardous situations. However, there is a need for their further development, especially in terms of increasing their precision, reducing false alarms, and improving the user interface. ADASs significantly contribute to enhancing safety and driving comfort. Yet, they are still in development and require continuous optimization and driver education to fully harness their potential. Technological advancements are expected to make these systems even more effective and user-friendly. Full article
(This article belongs to the Section Electronic Sensors)
Show Figures

Figure 1

Figure 1
<p>Location of ADASs in the vehicle [<a href="#B11-sensors-24-06223" class="html-bibr">11</a>].</p>
Full article ">Figure 2
<p>Vehicle speed adjustment in adaptive cruise control (ACC) [<a href="#B22-sensors-24-06223" class="html-bibr">22</a>].</p>
Full article ">Figure 3
<p>Blind spot detection (BSD) [<a href="#B27-sensors-24-06223" class="html-bibr">27</a>].</p>
Full article ">Figure 4
<p>Lane keeping system (LDW/LKS) [<a href="#B27-sensors-24-06223" class="html-bibr">27</a>].</p>
Full article ">Figure 5
<p>Intelligent headlamp control (IHC) [<a href="#B31-sensors-24-06223" class="html-bibr">31</a>].</p>
Full article ">Figure 6
<p>Emergency brake assist [<a href="#B32-sensors-24-06223" class="html-bibr">32</a>].</p>
Full article ">Figure 7
<p>How often respondents use ADASs while driving.</p>
Full article ">Figure 8
<p>How safe respondents feel when driving with ADASs.</p>
Full article ">Figure 9
<p>The impact of ADASs on driving comfort according to respondents.</p>
Full article ">
22 pages, 104930 KiB  
Article
A Complex Background SAR Ship Target Detection Method Based on Fusion Tensor and Cross-Domain Adversarial Learning
by Haopeng Chan, Xiaolan Qiu, Xin Gao and Dongdong Lu
Remote Sens. 2024, 16(18), 3492; https://doi.org/10.3390/rs16183492 - 20 Sep 2024
Viewed by 682
Abstract
Synthetic Aperture Radar (SAR) ship target detection has been extensively researched. However, most methods use the same dataset division for both training and validation. In practical applications, it is often necessary to quickly adapt to new loads, new modes, and new data to detect targets effectively. This presents a cross-domain detection problem that requires further study. This paper proposes a method for detecting SAR ships in complex backgrounds using fusion tensor and cross-domain adversarial learning. The method is designed to address the cross-domain detection problem of SAR ships with large differences between the training and test sets. Specifically, it can be used for the cross-domain detection task from the fully polarised medium-resolution ship dataset (source domain) to the high-resolution single-polarised dataset (target domain). This method proposes a channel fusion module (CFM) based on the YOLOV5s model. The CFM utilises the correlation between polarised channel images during training to enrich the feature information of single-polarised images extracted by the model during inference. This article proposes a module called the cross-domain adversarial learning module (CALM) to reduce overfitting and achieve adaptation between domains. Additionally, this paper introduces the anti-interference head (AIH), which decouples the detection head to reduce the conflict between the classification and localisation tasks. This improves the resistance to interference and the generalisation ability in complex backgrounds. This paper conducts cross-domain experiments using the constructed medium-resolution SAR full polarisation dataset (SFPD) as the source domain and the high-resolution single-polarised ship detection dataset (HRSID) as the target domain. Compared to the best-performing YOLOV8s model among typical mainstream models, this model improves precision by 4.9%, recall by 3.3%, AP by 2.4%, and F1 by 3.9%. This verifies the effectiveness of the method and provides a useful reference for improving cross-domain learning and model generalisation capability in the field of target detection. Full article
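The cross-domain adversarial learning module (CALM) described above is, at its core, an adversarial domain-adaptation scheme. A common way to realise such a scheme, though not necessarily the paper's exact design, is a gradient reversal layer feeding a domain discriminator; the PyTorch sketch below shows that generic mechanism with placeholder feature dimensions and names.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, reversed (scaled) gradient in backward:
    the standard trick behind adversarial domain adaptation."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class DomainDiscriminator(nn.Module):
    """Predicts source vs. target domain from backbone features; training the
    backbone through the reversed gradient pushes it toward domain-invariant features."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats, lam=1.0):
        return self.net(GradReverse.apply(feats, lam))

# Illustrative use: pooled backbone features from a mixed source/target batch.
feats = torch.randn(8, 256, requires_grad=True)
domain_labels = torch.cat([torch.zeros(4, 1), torch.ones(4, 1)])  # 0 = source, 1 = target
loss = nn.BCEWithLogitsLoss()(DomainDiscriminator()(feats), domain_labels)
loss.backward()  # the feature extractor receives reversed gradients, encouraging domain confusion
```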
Show Figures

Figure 1

Figure 1
<p>Four polarised images of a scene.</p>
Full article ">Figure 2
<p>A complex background SAR ship target detection method based on fusion tensor and cross-domain adversarial learning.</p>
Full article ">Figure 3
<p>Diagram of CALM model.</p>
Full article ">Figure 4
<p>Schematic representation of the cross-domain adversarial learning process.</p>
Full article ">Figure 5
<p>Diagram of AIH model.</p>
Full article ">Figure 6
<p>Selected images from the HRSID dataset are shown.</p>
Full article ">Figure 7
<p>Selected images from the SFPD dataset are shown.</p>
Full article ">Figure 8
<p>(<b>a</b>) Heat map of target position distribution for the SFPD dataset. (<b>b</b>) Heat map of target aspect distribution for the SFPD dataset.</p>
Full article ">Figure 9
<p>Plot of the results of the multi-model comparison experiment.</p>
Full article ">Figure 10
<p>CALM experiment results: (<b>a</b>) Ground Truth image; (<b>b</b>) baseline model detection image; (<b>c</b>) YOLOV5 + CALM detection image.</p>
Full article ">Figure 11
<p>Line graph of the change in loss for the CALM experimental validation set.</p>
Full article ">Figure 12
<p>CFM experiment results: (<b>a</b>) Ground Truth image; (<b>b</b>) YOLOV5 + CALM detection image; (<b>c</b>) YOLOV5 + CALM + CFM detection image.</p>
Full article ">Figure 13
<p>Line graph of the change in loss for the CFM experimental validation set.</p>
Full article ">Figure 14
<p>AIH experiment results: (<b>a</b>) Ground Truth image; (<b>b</b>) YOLOV5 + CALM + CFM detection image; (<b>c</b>) YOLOV5 + CALM + CFM + AIH detection image.</p>
Full article ">Figure 15
<p>Line graph of the change in loss for the AIH experimental validation set.</p>
Full article ">