Search Results (120)

Search Parameters:
Keywords = multiple echo detection

22 pages, 454 KiB  
Article
Dual-Function Radar Communications: A Secure Optimization Approach Using Partial Group Successive Interference Cancellation
by Mengqiu Chai, Shengjie Zhao and Yuan Liu
Remote Sens. 2025, 17(3), 364; https://doi.org/10.3390/rs17030364 - 22 Jan 2025
Viewed by 254
Abstract
As one of the promising technologies of 6G, dual-function radar communication (DFRC) integrates communication and radar sensing networks. However, with the application and deployment of DFRC, its security has become a significantly important issue. In this paper, we consider the physical layer security of a DFRC system in which the base station communicates with multiple legitimate users while simultaneously detecting a sensing target of interest. The sensing target is also a potential eavesdropper wiretapping the secure transmission. To this end, we propose a secure design based on partial group successive interference cancellation, which fully leverages message splitting and partial decoding to improve the rate increment of the legitimate users. To maximize the radar echo signal-to-noise ratio (SNR), we formulate a beamforming optimization problem and handle its non-convexity by introducing auxiliary variables and relaxing the problem. We then propose a joint secure beamforming and rate optimization algorithm to solve it. Simulation results demonstrate the effectiveness of our design in improving the sensing and secrecy performance of the considered DFRC system.
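The secrecy-rate quantity that such designs optimize is standard in physical-layer security. A minimal sketch (not the authors' algorithm; the SINR values are hypothetical) of the per-user secrecy rate, i.e., the positive gap between the legitimate rate and the eavesdropping rate:

```python
import numpy as np

def secrecy_rate(sinr_user, sinr_eve):
    """Per-user secrecy rate in bit/s/Hz: [log2(1+SINR_user) - log2(1+SINR_eve)]+."""
    return max(0.0, np.log2(1.0 + sinr_user) - np.log2(1.0 + sinr_eve))

# Example: a legitimate user at 15 dB SINR; the sensing target, acting as an
# eavesdropper, at 3 dB SINR (both values are illustrative).
print(secrecy_rate(10 ** (15 / 10), 10 ** (3 / 10)))  # ≈ 3.4 bit/s/Hz
```

The sum of these per-user rates is the sum secrecy rate compared in Figures 3 and 4.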
Figures:
Figure 1: A secure DFRC system with K legitimate users and a sensing target.
Figure 2: The radar echo SNR of the secure DFRC system.
Figure 3: Comparison of the sum secrecy rates under different L and μ with GRI.
Figure 4: Comparison of the sum secrecy rates under different L and μ with IRI.
Figure 5: The transmission rates and eavesdropping rates for individual users with GRI.
Figure 6: The transmission rates and eavesdropping rates for individual users with IRI.
Figure 7: The secrecy rate performance under different numbers of legitimate users.
Figure 8: Convergence performance of the optimization process.
16 pages, 2573 KiB  
Article
A Novel Temperature Drift Compensation Algorithm for Liquid-Level Measurement Systems
by Shanglong Li, Wanjia Gao and Wenyi Liu
Micromachines 2025, 16(1), 24; https://doi.org/10.3390/mi16010024 - 27 Dec 2024
Viewed by 427
Abstract
Ultrasonic detection is strongly affected by temperature drift, so this paper investigates a novel temperature compensation algorithm. Ultrasonic impedance-based liquid-level measurement is a crucial non-contact, non-destructive technique; however, temperature drift can severely affect the accuracy of measurements based on this technology. This study conducts theoretical analysis and experimental research on temperature drift phenomena and proposes a new compensation algorithm. Leveraging an experimental platform for an external fixed-point liquid-level detection system, the impact of temperature drift on ultrasonic echo energy and actual liquid-level height is examined. Experimental results demonstrate that temperature drift affects the speed and attenuation of ultrasonic waves, decreasing the accuracy of liquid-level measurements. The proposed temperature compensation method yields an average relative error of 3.427%, with errors spanning from 0.03 cm to 0.336 cm. The average relative error is reduced by 21.535% compared with the uncompensated case, showing the method's applicability across multiple temperature conditions and its significance in enhancing the accuracy of ultrasonic-based measurements.
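One common way to realize such a compensation, sketched below under the assumption of a smooth echo-energy-vs-temperature drift curve (the paper's exact model is not reproduced here; the data points are synthetic placeholders):

```python
import numpy as np

# Fit a low-order polynomial drift model E(T) at a fixed reference liquid level.
temp = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])          # °C (synthetic)
echo_energy = np.array([1.10, 1.04, 1.00, 0.95, 0.91, 0.88])  # normalized (synthetic)
drift = np.polynomial.Polynomial.fit(temp, echo_energy, deg=2)

def compensate(energy_measured, T, T_ref=20.0):
    """Refer a measurement taken at temperature T back to the reference temperature."""
    return energy_measured * drift(T_ref) / drift(T)

print(compensate(0.91, 40.0))  # ≈ 1.00, i.e., the 20 °C equivalent reading
```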
Figures:
Figure 1: Liquid-level measuring system: (a) experimental platform design diagram; (b) experimental platform design diagram.
Figure 2: The relationship between ultrasonic echo energy and liquid-level height at 20 °C.
Figure 3: The relationship between ultrasonic echo energy and liquid-level height at different temperatures.
Figure 4: The relationship between ultrasonic echo energy and temperature.
Figure 5: Temperature compensation model.
Figure 6: Comparison of the compensated liquid-level height with the original liquid-level height.
Figure 7: The relative error after compensation.
21 pages, 7791 KiB  
Article
Simulation Study on Detection and Localization of a Moving Target Under Reverberation in Deep Water
by Jincong Dun, Shihong Zhou, Yubo Qi and Changpeng Liu
J. Mar. Sci. Eng. 2024, 12(12), 2360; https://doi.org/10.3390/jmse12122360 - 22 Dec 2024
Viewed by 456
Abstract
Deep-water reverberation caused by multiple reflections from the seafloor and sea surface can degrade the performance of active sonars. To detect a moving target under reverberation conditions, a reverberation suppression method based on deep-water multipath Doppler shifts and the wideband ambiguity function (WAF) is proposed. Firstly, the multipath Doppler factors in the deep-water direct zone are analyzed and introduced into the target scattered sound field to obtain the echo of the moving target. The mesh method is used to simulate the deep-water reverberation waveform in the time domain. Then, a simulation model for an active sonar based on a source and a short vertical line array is established. Reverberation and target echo in the received signal can be separated in the Doppler-shift domain of the WAF. The multipath Doppler shifts in the echo are used to estimate the multipath arrival angles, which can be used for target localization. The simulation model and the reverberation suppression detection method can provide theoretical support and a technical reference for the active detection of moving targets in deep water.
(This article belongs to the Section Ocean Engineering)
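The wideband ambiguity function at the core of the method has a standard definition, χ(τ, α) = √α ∫ s(t) s*(α(t − τ)) dt, where α is the Doppler stretch factor. A minimal discrete sketch (not the paper's implementation; signal parameters are illustrative):

```python
import numpy as np

def waf(s, fs, taus, alphas):
    """Discrete wideband ambiguity function |chi(tau, alpha)| via resampling."""
    t = np.arange(len(s)) / fs
    out = np.empty((len(alphas), len(taus)), dtype=complex)
    for i, a in enumerate(alphas):
        for j, tau in enumerate(taus):
            # Resample a stretched, delayed reference copy of the signal.
            ts = a * (t - tau)
            s_ref = np.interp(ts, t, s.real) + 1j * np.interp(ts, t, s.imag)
            out[i, j] = np.sqrt(a) * np.sum(s * np.conj(s_ref)) / fs
    return np.abs(out)

# Example: a 0-200 Hz up-chirp; the peak should sit at tau ≈ 0, alpha ≈ 1.
fs, T = 8e3, 0.1
t = np.arange(0, T, 1 / fs)
s = np.exp(1j * np.pi * (200 / T) * t ** 2)
W = waf(s, fs, np.linspace(-0.01, 0.01, 41), np.linspace(0.99, 1.01, 21))
print(np.unravel_index(W.argmax(), W.shape))  # (10, 20): the matched copy
```

A moving target shifts the peak away from α = 1, which is what lets the echo separate from near-stationary reverberation in the Doppler-shift domain.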
Figures:
Figure 1: Transmission and reception models for the edges of the transmitted pulse: (a) the front edge; (b) the back edge.
Figure 2: Schematic of the simplified calculation.
Figure 3: Theoretical and calculated values of Doppler factors for different paths as a function of the horizontal distance between the sonar and the target: (a) DD; (b) DSR; (c) SRSR.
Figure 4: Schematic of the mesh method.
Figure 5: Flowchart of the detection and localization algorithm for moving targets in deep water.
Figure 6: WAF of the transmitted signal.
Figure 7: Schematic of the sound field.
Figure 8: Waveform of the received signal.
Figure 9: WAF of the received signal.
Figure 10: WAF as a function of time delay and Doppler shift: (a) a 2D pseudo-color map; (b) a color plot zooming in on the echoes.
Figure 11: Image of the analytical signal y(τ, θ).
Figure 12: Contour plots of (a) the difference in time delay between the target echoes of the DD path and the DSR path, and (b) the arrival angle of echoes of the DD path.
Figure 13: Cost function for the joint estimation of the target's depth and range. The label "∗" indicates the true target position.
Figure 14: (a) RERE and (b) REDE as functions of the target depth. The red lines represent the zero-error lines; the dotted lines denote the error curves.
Figure 15: (a) RERE and (b) REDE as functions of the target range. The red lines represent the zero-error lines; the dotted lines denote the error curves.
Figure 16: (a) RERE and (b) REDE as functions of the target velocity. The red lines represent the zero-error lines; the dotted lines denote the error curves.
Figure 17: PCL as a function of SRR.
21 pages, 6412 KiB  
Article
Detection of Flight Target via Multistatic Radar Based on Geosynchronous Orbit Satellite Irradiation
by Jia Dong, Peng Liu, Bingnan Wang and Yaqiu Jin
Remote Sens. 2024, 16(23), 4582; https://doi.org/10.3390/rs16234582 - 6 Dec 2024
Viewed by 557
Abstract
As a special microwave detection system, multistatic radar has obvious advantages in covert operation, anti-jamming, and anti-stealth due to its spatially diverse configuration. As a high-orbit irradiation source, a geosynchronous orbit (GEO) satellite offers a low revisit period, a large beam coverage area, and stable ground-beam power compared with traditional passive radar irradiation sources. This paper focuses on the key technologies of flight target detection in multistatic radar based on geosynchronous orbit satellite irradiation with one transmitter and multiple receivers. Firstly, to address the low signal-to-noise ratio (SNR) and range cell migration of high-speed cruise targets, the Radon–Fourier transform constant false alarm rate detector with range cell migration correction (RFT-CFAR-RCMC) is adopted to realize coherent integration of the echoes with range cell migration correction (RCM) and Doppler phase compensation, significantly improving the SNR; a staggered PRF is further used to resolve ambiguity and obtain multi-view data. Secondly, based on this multi-view detection data, a linear least squares (LLS) multistatic positioning method combining bistatic range (BR) and time difference of arrival (TDOA) positioning is used: the BR and TDOA measurement equations are constructed, linearized by mathematical transformation, and solved by the LLS method, and target positioning and velocity inversion are realized by fusing multistatic data. Finally, using the target positioning data as radar observations, a Kalman filter (KF) is used to achieve flight trajectory tracking. Numerical simulation verifies the effectiveness of the proposed process.
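The final tracking stage uses a standard Kalman filter. A minimal one-dimensional constant-velocity sketch with position-only measurements (the matrices and noise levels are illustrative, not the paper's model):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[4.0]])                   # measurement noise covariance (assumed)

def kf_step(x, P, z):
    # Predict.
    x, P = F @ x, F @ P @ F.T + Q
    # Update with measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

x, P = np.zeros((2, 1)), 10.0 * np.eye(2)
for z in [1.2, 2.1, 2.9, 4.2, 5.0]:     # noisy positions of a ~1 m/s target
    x, P = kf_step(x, P, np.array([[z]]))
print(x.ravel())                        # ≈ [5.0, 1.0]: position and velocity
```

In the paper the state would be the 3D position/velocity from the multistatic inversion, but the predict/update cycle is the same.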
Figures:
Figure 1: Spatial configuration of GEO-Airship bistatic radar.
Figure 2: Maximum detection range of bistatic radar.
Figure 3: Geometric configuration of GEO-Airships multistatic radar.
Figure 4: Diagrammatic drawing of RFT.
Figure 5: Algorithm flow chart of 2D-CA-CFAR.
Figure 6: Phase compensation process in RCMC.
Figure 7: Algorithm process of ambiguity resolution based on staggered PRF.
Figure 8: Process of position and velocity inversion.
Figure 9: Process of the Kalman filter.
Figure 10: Detection scene and original echo signal: (a) spatial configuration of radar and target; (b) original echo signal at −40 dB.
Figure 11: Result of coherent integration and parameter estimation: (a) RFT in the parameter domain; (b) 2D-CA-CFAR in the parameter domain.
Figure 12: Results of MTD and the proposed algorithm: (a–c) echoes of targets 1–3 after MTD; (d–f) echoes of targets 1–3 with this algorithm.
Figure 13: Comparison of range and velocity of target 1 before and after ambiguity resolution by different Airship receivers; the top row shows bistatic range and the bottom row radial velocity: (a,c) Airship R1; (b,d) Airship R2.
Figure 14: The result of multistatic positioning combining BR and TDOA.
Figure 15: Trajectory tracking based on the Kalman filter.
Figure 16: Relationship between echo SNR and detection probability.
Figure 17: Error distribution of the position estimation and velocity inversion: (a) position error; (b) velocity error.
Figure 18: Distribution of error level: (a) position estimation; (b) MSE of position error in iteration; (c) MSE of velocity error in iteration.
23 pages, 1536 KiB  
Article
Enhancing Weather Target Detection with Non-Uniform Pulse Repetition Time (NPRT) Waveforms
by Luyao Sun and Tao Wang
Remote Sens. 2024, 16(23), 4435; https://doi.org/10.3390/rs16234435 - 27 Nov 2024
Viewed by 433
Abstract
The velocity/distance trade-off poses a fundamental challenge in pulsed Doppler weather radar systems and is known as the velocity/distance dilemma. Techniques such as multiple pulse repetition frequency, staggered pulse repetition time (PRT), and pulse phase coding are commonly used to mitigate this issue. The current study evaluates the adaptability and capability of a specific type of low-capture signal, the non-uniform PRT (NPRT) waveform, by analyzing the weather target characteristics of typical velocity distributions. A spectral moments estimation (SME) signal-processing algorithm for the NPRT weather echo is designed to calculate the average power, velocity, and spectrum width of the target. A comprehensive error analysis is conducted to ascertain the efficacy of the NPRT processing algorithm under various influencing factors. The results demonstrate that the spectral parameters of a weather target echo with velocities in [−50, 50] m/s obtained through random-jitter NPRT signals align with radar functionality requirements (RFRs). Notably, the NPRT waveform resolves the inherent conflict between the maximum unambiguous distance and velocity and raises the upper limit of the maximal observation velocity. The evaluation results confirm that nonlinear radar signal processing technology can improve a radar's detection performance and provide a new method for realizing multifunctional radar observation in different applications.
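For a uniform PRT, the three spectral moments are classically obtained from lag-0 and lag-1 autocorrelations (the pulse-pair estimator); the paper's SME algorithm generalizes this to non-uniform sampling. A minimal uniform-PRT sketch with illustrative parameters:

```python
import numpy as np

def spectral_moments(iq, T, lam):
    """Pulse-pair power, radial velocity, and spectrum width (Gaussian-spectrum assumption)."""
    power = np.mean(np.abs(iq) ** 2)
    R1 = np.mean(iq[1:] * np.conj(iq[:-1]))          # lag-1 autocorrelation
    velocity = -lam / (4 * np.pi * T) * np.angle(R1)
    ratio = np.clip(np.abs(R1) / power, 1e-12, 1.0)
    width = lam / (4 * np.pi * T) * np.sqrt(2 * np.log(1.0 / ratio))
    return power, velocity, width

rng = np.random.default_rng(0)
T, lam, v_true = 1e-3, 0.1, 15.0                     # 1 ms PRT, 10 cm wavelength
n = np.arange(64)
iq = np.exp(-4j * np.pi * v_true * n * T / lam)      # echo of a 15 m/s target
iq = iq + 0.1 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
print(spectral_moments(iq, T, lam))                  # velocity ≈ 15 m/s
```

With T = 1 ms and λ = 0.1 m, the unambiguous velocity of this uniform baseline is λ/(4T) = 25 m/s; the NPRT waveform is what pushes this limit higher.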
Figures:
Figure 1: The velocity spectrum of a weather echo and the signal amplitude of its corresponding NPRT spectrum in the time domain, with a real power of 1 W, a velocity spectrum width of 4 m/s, and average radial velocities of 50 m/s and 15 m/s: (a,b) 50 m/s; (c,d) 15 m/s; (e) the time-interval values of the random jitter of the generated 64-point NPRT waveform; (f) the statistical average of the frequency spectrum of an NPRT waveform simulated 5000 times.
Figure 2: The aliased power spectrum of the weather echo and the target power spectrum obtained by the SWA algorithm, with a real weather power of 1 W, a velocity spectrum width of 4 m/s, and average radial velocities of 50 m/s and 15 m/s: (a,c) 50 m/s; (b,d) 15 m/s; the window width of the different colors is 22 m/s.
Figure 3: Target detection processing of the NPRT weather echo; P, V, and W denote the estimates of power, velocity, and spectrum width, respectively.
Figure 4: Estimation errors of power, velocity, and spectrum width within an input velocity range of [−50, 50] m/s for the NPRT weather echo under different sliding windows, when the true power is 0 dB and the spectrum width is 4 m/s; left, bias; right, standard deviation; 100 Monte Carlo simulations under an RPRT of 1 ms.
Figure 5: Estimation bias and standard deviation of power (dB), velocity (m/s), and spectrum width (m/s) of the NPRT weather echo over velocity changes of [−50, 50] m/s, when the window width is 22 m/s, the power is 0 dB, and the spectrum width is 4 m/s; P, V, and W denote the estimation errors of power, velocity, and spectrum width, respectively; 100 Monte Carlo simulations under an RPRT of 1 ms.
Figure 6: Error comparison of the spectral moment estimation results between the NPRT and SPRT techniques when the true power is 0 dB and the spectrum width is 4 m/s, within an input velocity range of [−50, 50] m/s, over 100 Monte Carlo simulations.
Figure 7: Estimation errors of power, velocity, and spectrum width, within the input velocity range of [−50, 50] m/s, of an NPRT weather echo under different pulse numbers, when the true power is 0 dB, the spectrum width is 4 m/s, and a 22 m/s window width is used; left, bias; right, standard deviation; 100 Monte Carlo simulations under an RPRT of 1 ms.
Figure 8: Estimation bias of power, velocity, and spectrum width of NPRT weather echoes under different pulse numbers, within the input velocity range of [−50, 50] m/s; true power 0 dB, spectrum width 4 m/s, 22 m/s window width; 100 Monte Carlo simulations under an RPRT of 1 ms.
Figure 9: Estimation bias and standard deviation (STD) of power, velocity, and spectrum width after using the SWA algorithm to find the optimal window of the NPRT weather echo under different target spectrum widths; true power 0 dB, input velocity range [−50, 50] m/s; 100 Monte Carlo simulations under an RPRT of 1 ms.
Figure 10: Estimation errors of power, velocity, and spectrum width, within the input velocity range of [−50, 50] m/s, of the NPRT weather echo under different SNRs; left, bias; right, standard deviation; true power 0 dB, spectrum width 4 m/s, 22 m/s window width; 100 Monte Carlo simulations under an RPRT of 1 ms.
Figure 11: The weather echo aliased power spectrum under different RPRTs, with a real weather power of 1 W, a velocity spectrum width of 4 m/s, an NPRT pulse number of 64, and average radial velocities of 50 m/s and 15 m/s: (a,c) 50 m/s; (b,d) 15 m/s.
Figure 12: Estimation errors of power, velocity, and spectrum width, within the input velocity range of [−50, 50] m/s, of the NPRT weather echo under different sliding windows, when the power is 0 dB and the spectrum width is 4 m/s; left, bias; right, standard deviation; 2 ms RPRT, 100 Monte Carlo simulations.
18 pages, 21647 KiB  
Article
Modified Hybrid Integration Algorithm for Moving Weak Target in Dual-Function Radar and Communication System
by Wenshuai Ji, Tao Liu, Yuxiao Song, Haoran Yin, Biao Tian and Nannan Zhu
Remote Sens. 2024, 16(19), 3601; https://doi.org/10.3390/rs16193601 - 27 Sep 2024
Cited by 1 | Viewed by 863
Abstract
To detect moving weak targets in a dual-function radar communication (DFRC) system using an orthogonal frequency division multiplexing (OFDM) waveform, a modified hybrid integration (MHI) method is proposed in this paper. A high-speed aircraft can cause range walk (RW) and Doppler walk (DW), rendering traditional detection methods ineffective. To overcome RW and DW, this paper proposes an integration approach combining DFRC and OFDM. The proposed approach consists of two primary components: intra-frame coherent integration and hybrid multi-inter-frame integration. After the echo signal is re-fragmented into multiple subfragments, the first step integrates energy across fixed situations within intra-frames for each subcarrier. Subsequently, coherent integration is performed across the subfragments, followed by the application of a Radon transform (RT) to the generated frames based on the properties derived from the coherent integration output. This paper provides detailed expressions and analyses for various performance metrics of the proposed method, including the communication bit error ratio (BER), the responses of coherent and non-coherent outputs, and the probability of detection. Simulation results demonstrate the effectiveness of our strategy.
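The benefit of coherent integration can be seen in a minimal slow-time FFT (classic MTD) sketch: an echo with constant Doppler collapses into one spectral peak, gaining roughly 10·log10(N) dB of SNR over N pulses. The RW/DW correction that the paper adds before this step is omitted here, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, fd, prf = 128, 300.0, 2000.0                  # pulses, Doppler (Hz), PRF (Hz)
n = np.arange(N)
echo = np.exp(2j * np.pi * fd * n / prf)         # one range cell along slow time
noisy = echo + 2.0 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

spectrum = np.fft.fft(noisy)                     # coherent integration
doppler_bins = np.fft.fftfreq(N, 1 / prf)
print(doppler_bins[np.abs(spectrum).argmax()])   # ≈ 300 Hz: target peak recovered
```

For a weak, accelerating target the energy would smear across range and Doppler bins, which is why the hybrid intra-/inter-frame integration and the Radon transform are needed.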
Figures:
Figure 1: Detailed processing flowchart of the MHI.
Figure 2: Sketch map of the flowchart for intra-frame integration.
Figure 3: Sketch map of the flowchart for inter-frame integration.
Figure 4: Modified GRT integration path.
Figure 5: Intra-frame integration results: (a) range across echo; (b) intra-frame integration; (c) one-frame integration result of a single target; (d) the distance slice of (c); (e) the Doppler slice of (d).
Figure 6: Single-target results for SNR = 20 dB: (a) MTD result; (b) HI integration result; (c) proposed method's result.
Figure 7: Single-target results for SNR = −20 dB: (a) MTD result; (b) HI integration result; (c) proposed method's result.
Figure 8: Simulation results for multiple targets (scenario 1): (a) multiple-target echo distribution; (b) integration result on the velocity–range plane; (c) integration result on the range–acceleration plane; (d) integration result on the velocity–acceleration plane.
Figure 9: Simulation results for multiple targets (scenario 2): panels as in Figure 8.
Figure 10: Simulation results for multiple targets (scenario 3): panels as in Figure 8.
Figure 11: Communication BER results: (a) OFDM IRCS BER with AWGN; (b) BER with different modulation methods; (c) BER with different demodulation methods.
Figure 12: Detection probabilities of different methods.
Figure 13: Computational complexity of different methods.
18 pages, 3476 KiB  
Article
Study of Millimeter-Wave Fuze Echo Characteristics under Rainfall Conditions Using the Monte Carlo Method
by Bing Yang, Zhe Guo, Kaiwei Wu and Zhonghua Huang
Appl. Sci. 2024, 14(18), 8352; https://doi.org/10.3390/app14188352 - 17 Sep 2024
Viewed by 703
Abstract
Because the wavelength of millimeter-wave (MMW) signals is similar to raindrop diameters, rainfall induces significant attenuation and scattering effects that challenge the detection performance of MMW fuzes in rainy environments. To enhance the adaptability of frequency-modulated MMW fuzes in such conditions, the effects of rain on MMW signal attenuation and scattering are investigated. A mathematical model for the multipath echo signals of the fuze was developed, the Monte Carlo method was employed to simulate echo signals considering multiple scattering, and experimental validations were conducted. The simulation and experimental results revealed that rainfall raises the bottom noise of the echo signal, with rain backscatter noise predominantly affecting the lower end of the echo-signal spectrum. However, rain conditions below torrential levels did not significantly impact the detection of strongly reflecting targets at the high end of the spectrum. The modeling approach and findings offer theoretical support for designing MMW fuzes with improved environmental adaptability.
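The Monte Carlo treatment of photon-raindrop interaction rests on sampling exponential free paths from an extinction coefficient. A minimal sketch, assuming Beer-Lambert extinction with a placeholder coefficient (not a value from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
k_ext = 0.02            # extinction coefficient in rain, 1/m (assumed placeholder)
n_photons = 100_000
target_range = 9.0      # one-way distance to the target, m

# Distance to each photon's first raindrop interaction is exponential in k_ext.
free_path = rng.exponential(1.0 / k_ext, n_photons)

# Fraction of photons reaching the target unscattered (one way), MC vs. analytic.
print(np.mean(free_path > target_range), np.exp(-k_ext * target_range))  # ≈ 0.835
```

The full simulation then draws a scattering direction from the phase function at each interaction and follows photons until they return to the receiver or are lost.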
Figures:
Figure 1: Attenuation coefficient.
Figure 2: Rainfall scattering-phase function.
Figure 3: Detection by an MMW fuze under rainfall conditions; the dots represent raindrops of different sizes, and the arrows indicate the propagation direction of MMW signals.
Figure 4: Coordinate systems of (a) particle launch, (b) target scattering, and (c) raindrop scattering. A marks the actual scattering direction of the particles; B and C are its projections on the XOY and YOZ planes, respectively. The green, red, and blue arrows denote the actual incidence direction, the actual scattering direction and its projections, and the specular reflection direction and its projections.
Figure 5: Beat signal at 9 m without rain: (a) time domain; (b) frequency domain.
Figure 6: Beat signal spectrum at a rainfall rate of 16 mm/h and an operating frequency of 35 GHz: (a) 6 m, (b) 9 m, (c) 12 m; at 50 mm/h and 35 GHz: (d) 6 m, (e) 9 m, (f) 12 m; at 50 mm/h and 60 GHz: (g) 6 m, (h) 9 m, (i) 12 m.
Figure 7: Amplitude of (a) the target signal; (b) rain backscatter noise.
Figure 8: Beat signal spectrum at a distance of 9 m, a rainfall rate of 50 mm/h, and an operating frequency of 60 GHz for 3 dB beam widths of (a) 120°, (b) 60°, and (c) 30°.
Figure 9: Detection scenarios: (a) 35 GHz and 60 GHz; (b) 24 GHz. The arrows connect the actual detection scene and the virtual schematic scene.
Figure 10: 24 GHz detector: (a) test results; (b) simulation result.
Figure 11: Test results without rain and with torrential rain: (a) 35 GHz; (b) 60 GHz.
Figure 12: Bottom noise test result: (a) in [8]; (b) simulation.
23 pages, 1814 KiB  
Article
Doppler-Spread Space Target Detection Based on Overlapping Group Shrinkage and Order Statistics
by Linsheng Bu, Tuo Fu, Defeng Chen, Huawei Cao, Shuo Zhang and Jialiang Han
Remote Sens. 2024, 16(18), 3413; https://doi.org/10.3390/rs16183413 - 13 Sep 2024
Viewed by 1094
Abstract
The Doppler-spread problem is commonly encountered in space target observation with ground-based radar when prolonged coherent integration is used. Even when the translational motion is accurately compensated, the phase resulting from changes in the target observation attitude (TOA) still spreads the target's echo energy across multiple Doppler cells. In particular, as the TOA change undergoes multiple cycles within a coherent processing interval (CPI), the Doppler spectrum spreads into equidistant sparse line spectra, posing a substantial challenge for target detection. To address this problem, we propose a generalized likelihood ratio test based on overlapping group shrinkage denoising and order statistics (OGSos-GLRT). First, the Doppler-domain signal is denoised according to its equidistant sparse structure, allowing recovery of the Doppler cells where line spectra may be situated. Then, several of the largest Doppler cells are integrated into the GLRT for detection. An analytical expression for the false alarm probability of the proposed detector is also derived. Additionally, a modified OGSos-GLRT is proposed that makes decisions over an increasing estimated number of line spectra (ENLS), improving robustness when the ENLS mismatches the actual value. Finally, Monte Carlo simulations confirm the effectiveness of the proposed detector, even at low signal-to-noise ratios (SNRs).
(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)
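After denoising, the detector keeps only the largest Doppler cells. A minimal order-statistics sketch of that step (the OGS denoising stage and the exact GLRT statistic are omitted; K and the threshold are illustrative):

```python
import numpy as np

def os_detector(doppler_power, K, threshold):
    """Sum the K largest Doppler cells and compare against a threshold."""
    statistic = np.sort(doppler_power)[-K:].sum()
    return statistic > threshold, statistic

rng = np.random.default_rng(3)
N, L = 256, 8                              # Doppler cells, number of line spectra
noise = rng.exponential(1.0, N)            # unit-mean noise power
signal = noise.copy()
signal[:: N // L] += 6.0                   # L equidistant spectral lines

print(os_detector(noise, K=8, threshold=45.0))   # typically (False, ...)
print(os_detector(signal, K=8, threshold=45.0))  # typically (True, ...)
```

In the paper the threshold is set analytically from the derived false-alarm probability rather than by simulation.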
Figures:
Figure 1: Normalized echo signals with SNR = −12 dB and L = 8: (a) slow-time domain; (b) Doppler domain.
Figure 2: Doppler sequence with SNR = −12 dB, L = 8, I = 16: (a) before denoising; (b) after denoising; (c) interval-updating process.
Figure 3: Doppler sequence with SNR = −12 dB, L = 8, I = 17, 17, 16, 17, 16, 16, 16: (a) before denoising; (b) after denoising.
Figure 4: Schematic of the proposed detector.
Figure 5: Probability of false alarm: (a) comparison between the analytical expression and Monte Carlo simulations; (b) sensitivity analysis with respect to noise power.
Figure 6: Pd versus SNR of OGSos-GLRT with Model 1 for λ = 45, 50, 55, 60, 65.
Figure 7: Pd versus SNR of MF-GLRT, OGSos-GLRT, OS-GLRT, and NLSD-GLRT for Models 1–4.
Figure 8: Pd versus SNR of OGSos-GLRT, OS-GLRT, and NLSD-GLRT with the four models for N = 320, 640, 1280, 2560.
Figure 9: Collapsing loss versus number of Doppler cells for OGSos-GLRT, OS-GLRT, and NLSD-GLRT with Model 3 at Pfa = 10⁻⁴ and Pd = 95%.
Figure 10: Pd versus SNR of OGSos-GLRT and OS-GLRT with Model 3 for Tσ = 20, 10, 5, 2.5 s.
Figure 11: Pd versus SNR of MF-GLRT, OGSos-GLRT, and OS-GLRT with Model 1 for L = 4, 8, 16, 32.
Figure 12: Pd versus SNR of OGSos-GLRT and OS-GLRT with Model 1 for L = 8, L̂ = 2, 4, 8, 12, 16, 32; mismatch loss of OGSos-GLRT and OS-GLRT versus L̂ with Pd = 95%.
Figure 13: Schematic of the modification to OGSos-GLRT.
Figure 14: Pd versus SNR of OGSos-GLRT (in the matched case) and modified OGSos-GLRT with Model 1 for L = 4, 8, 16, 32.
19 pages, 5746 KiB  
Article
Dual-Wavelength LiDAR with a Single-Pixel Detector Based on the Time-Stretched Method
by Simin Chen, Shaojing Song, Yicheng Wang, Hao Pan, Fashuai Li and Yuwei Chen
Sensors 2024, 24(17), 5741; https://doi.org/10.3390/s24175741 - 4 Sep 2024
Viewed by 827
Abstract
In agriculture and forestry, the Normalized Difference Vegetation Index (NDVI) is a critical indicator for assessing the physiological state of plants. Traditional imaging sensors can only collect two-dimensional vegetation distribution data, while dual-wavelength LiDAR can also capture vertical distribution information, which is essential for forest structure recovery and precision agriculture management. However, existing LiDAR systems face challenges in detecting echoes at two wavelengths, typically relying on multiple detectors or array sensors, leading to high costs, bulky systems, and slow detection rates. This study introduces a time-stretched method that separates the two laser wavelengths in the time dimension, enabling a more cost-effective and efficient dual-spectral (600 nm and 800 nm) LiDAR system. Utilizing a supercontinuum laser and a single-pixel detector, the system incorporates specifically designed time-stretched transmission optics, enhancing the efficiency of NDVI data collection. We validated the ranging performance of the system, achieving an accuracy of approximately 3 mm with data collected on a high-sampling-rate oscilloscope. We further evaluated the system by detecting branches, soil, and leaves in various health conditions: the dual-wavelength LiDAR detects variations in NDVI due to differences in chlorophyll concentration and water content. Additionally, we used the radar equation to analyze the measured scenes, clarifying the impact of the incidence angle on reflectance and NDVI. Scanning a Red Sumach, we obtained its NDVI distribution, demonstrating its physical characteristics. In conclusion, the proposed dual-wavelength LiDAR based on the time-stretched method has proven effective in agricultural and forestry applications, offering a new technological approach for future precision agriculture and forest management.
(This article belongs to the Section Radar Sensors)
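The index itself is the standard normalized difference of the two calibrated reflectances, with 800 nm serving as the near-infrared (NIR) channel and 600 nm as the red channel. A minimal sketch with illustrative reflectance values:

```python
import numpy as np

def ndvi(refl_nir, refl_red):
    """NDVI = (NIR - RED) / (NIR + RED), elementwise over point-cloud returns."""
    refl_nir = np.asarray(refl_nir, dtype=float)
    refl_red = np.asarray(refl_red, dtype=float)
    return (refl_nir - refl_red) / (refl_nir + refl_red)

# Healthy leaves reflect strongly in NIR and absorb red; soil is spectrally flatter.
print(ndvi(0.50, 0.08))   # green leaf -> ≈ 0.72 (illustrative reflectances)
print(ndvi(0.30, 0.25))   # soil       -> ≈ 0.09
```

Applied per return, this turns the two registered point clouds of Figure 10 into the NDVI point cloud of Figure 11.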
Figures:
Figure 1: Diagram of the dual-wavelength multi-spectral LiDAR system architecture.
Figure 2: Physical image of the supercontinuum laser.
Figure 3: (a) Physical image of the APD 210; (b) spectral response curve of the APD 210.
Figure 4: Dual-wavelength LiDAR demonstration instrument in the laboratory test.
Figure 5: (a) The intensity of a leaf collected by the system at distances of 10 m, 20 m, 30 m, 40 m, and 50 m; (b) the corresponding reflectance calibrated by a standard SRB.
Figure 6: The intensity of the SRB collected by the system at distances of 10 m, 20 m, 30 m, 40 m, and 50 m.
Figure 7: Photos of the green leaves, dry leaves, diseased leaves, branches, and soil selected in the experiment.
Figure 8: Echo waveforms of (a) green leaves, (b) dry leaves, (c) yellow branches, and (d) soil.
Figure 9: (a) Reflectance at 800 nm (blue) and 600 nm (red) and (b) NDVI of a green leaf, an ill leaf (unhealthy part), an ill leaf (healthy part), a dry leaf, a green branch, a yellow branch, and soil.
Figure 10: (a) Photo of Red Sumach; (b) 600 nm echo point cloud of Red Sumach at 10 m; (c) 800 nm echo point cloud of Red Sumach at 10 m.
Figure 11: NDVI point cloud map of Red Sumach.
33 pages, 18210 KiB  
Article
Ultrafast Brain MRI at 3 T for MS: Evaluation of a 51-Second Deep Learning-Enhanced T2-EPI-FLAIR Sequence
by Martin Schuhholz, Christer Ruff, Eva Bürkle, Thorsten Feiweier, Bryan Clifford, Markus Kowarik and Benjamin Bender
Diagnostics 2024, 14(17), 1841; https://doi.org/10.3390/diagnostics14171841 - 23 Aug 2024
Viewed by 1309
Abstract
In neuroimaging, there is no equivalent alternative to magnetic resonance imaging (MRI). However, image acquisitions are generally time-consuming, which may limit utilization in some cases, e.g., in patients who cannot remain motionless for long or suffer from claustrophobia, or in the event of extensive waiting times. For multiple sclerosis (MS) patients, MRI plays a major role in drug therapy decision-making. The purpose of this study was to evaluate whether an ultrafast, T2-weighted (T2w), deep learning-enhanced (DL), echo-planar-imaging-based (EPI) fluid-attenuated inversion recovery (FLAIR) sequence (FLAIRUF), which has so far targeted neurological emergencies, could be an option for detecting MS lesions of the brain compared with conventional FLAIR sequences. Seventeen MS patients were enrolled prospectively in this exploratory study. Standard MRI protocols and ultrafast acquisitions were conducted at 3 tesla (T), including three-dimensional (3D) FLAIR, turbo/fast spin-echo (TSE) FLAIR, and FLAIRUF. Inflammatory lesions were grouped by size and location. Lesion conspicuity and image quality were rated on an ordinal five-point Likert scale, and lesion detection rates were calculated. Statistical analyses were performed to compare the results. Altogether, 568 different lesions were found. The data indicated no significant differences in lesion detection (sensitivity and positive predictive value [PPV]) between FLAIRUF and axially reconstructed 3D-FLAIR (lesion size ≥3 mm × ≥2 mm) and no differences in sensitivity between FLAIRUF and TSE-FLAIR (lesion size ≥3 mm total). Lesion conspicuity in FLAIRUF was similar in all brain regions except for superior conspicuity in the occipital lobe and inferior conspicuity in the central brain regions. Further findings include location-dependent limitations of the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) as well as artifacts such as spatial distortions in FLAIRUF. In conclusion, FLAIRUF could be an expedient alternative to conventional brain imaging methods in MS patients, since the acquisition can be performed in a fraction of the time while maintaining good image quality.
(This article belongs to the Special Issue Artificial Intelligence in Brain Diseases)
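The detection metrics reported in the study follow their standard definitions from true-positive (TP), false-negative (FN), and false-positive (FP) lesion counts. A minimal sketch with placeholder counts (not the paper's data):

```python
def sensitivity(tp, fn):
    """Fraction of gold-standard lesions that the sequence detected."""
    return tp / (tp + fn)

def ppv(tp, fp):
    """Fraction of detected lesions that are real (positive predictive value)."""
    return tp / (tp + fp)

tp, fn, fp = 180, 20, 10   # placeholder lesion counts
print(f"sensitivity = {sensitivity(tp, fn):.2f}, PPV = {ppv(tp, fp):.2f}")
# sensitivity = 0.90, PPV = 0.95
```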
Show Figures

Figure 1

Figure 1
<p>Flowchart of study participants. Note. MRI = magnetic resonance imaging; MS = multiple sclerosis.</p>
Full article ">Figure 2
<p>Classification of TP<sub>GS</sub> lesions. Note. TP<sub>GS</sub> lesions = total number of true-positive lesions detected using all contrasts available (gold standard); FLAIR<sub>3Da</sub> = axial reconstruction of FLAIR<sub>3D</sub>; FLAIR<sub>UF</sub> = ultrafast axial FLAIR; tseTP<sub>GS</sub> = subset of TP<sub>GS</sub> recorded with FLAIR<sub>TSE</sub>; FLAIR<sub>TSE</sub> = axial standard TSE-FLAIR; GS = utilization of all contrasts available, particularly T2-FLAIR, T1, and T2; TP = true-positives; FN = false-negatives; 1 = better/larger in the FLAIR<sub>UF</sub> images compared to FLAIR<sub>3Da</sub>; 2 = equal compared to FLAIR<sub>3Da</sub>; 3 = better in the FLAIR<sub>3Da</sub> images, but classified as a lesion using only the FLAIR<sub>UF</sub> images; 4 = better in the FLAIR<sub>3Da</sub> images and classified as no lesion using only the FLAIR<sub>UF</sub> images; 5 = FLAIR<sub>3Da</sub> lesion that is not at all visible in the FLAIR<sub>UF</sub> images; PV = periventricular; (J-)C = (juxta-)cortical; I = infratentorial (brainstem, cerebellum); F = frontal; P = parietal; T = temporal; O = occipital; C = central (insular lobe, corpus callosum, basal nuclei, diencephalon).</p>
Full article ">Figure 3
<p>Classification of FP lesions. Note. FP = false-positive; further abbreviations as in <a href="#diagnostics-14-01841-f002" class="html-fig">Figure 2</a>.</p>
Full article ">Figure 4
<p>Correlations between TP<sub>UF</sub>, FN<sub>UF</sub>, TP<sub>3Da</sub>, FN<sub>3Da</sub>, and TP<sub>GS</sub> lesion counts. Schematic illustration (<b>a</b>) and contingency table (<b>b</b>). Note. TP<sub>GS</sub> = number of true-positive lesions detected using all contrasts available (gold standard); TP<sub>UF</sub> = true-positive lesions detected in the FLAIR<sub>UF</sub> images; FN<sub>UF</sub> = false-negative lesions using the FLAIR<sub>UF</sub> images; TP<sub>3Da</sub> = true-positive lesions detected in the FLAIR<sub>3Da</sub> images; FN<sub>3Da</sub> = false-negative lesions using the FLAIR<sub>3Da</sub> images.</p>
Full article ">Figure 5
<p>Contingency tables correlating TP<sub>UF</sub>, FN<sub>UF</sub>, TP<sub>3Da</sub>, FN<sub>3Da</sub>, and TP<sub>GS</sub> lesion counts, grouped by size, according to <a href="#diagnostics-14-01841-f004" class="html-fig">Figure 4</a>. Note. Abbreviations as in <a href="#diagnostics-14-01841-f004" class="html-fig">Figure 4</a>. <sup>1</sup> 191 of them were ‘characteristic MS lesions’, including 109 periventricular lesions, 44 infratentorial lesions, and 38 (juxta-)cortical lesions.</p>
Full article ">Figure 6
<p>Sensitivity and PPV in terms of lesion detection, using FLAIR<sub>UF</sub> images (gray) and FLAIR<sub>3Da</sub> images (white). Four groups, which represent different lesion diameters, are displayed, respectively. The additional error bars denote the 95% CIs: (<b>a</b>) For wide lesions, no significant difference in sensitivity could be found. For the groups that comprise smaller lesions, however, the sensitivity was significantly inferior using the FLAIR<sub>UF</sub> images, decreasing more and more as a function of lesion diameter. (<b>b</b>) No significant differences in PPV were found for any of the large lesion groups (large, wide, and narrow). For small lesions, the PPV in the FLAIR<sub>UF</sub> images was moderately lower compared to the FLAIR<sub>3Da</sub> group (no overlap between confidence intervals). Note. Abbreviations as in <a href="#diagnostics-14-01841-t003" class="html-table">Table 3</a>. * <span class="html-italic">p</span> &lt; 0.05.</p>
Full article ">Figure 7
<p>FN<sub>UF</sub> lesions (top row) contrasted with their TP<sub>3Da</sub> counterparts (arrow, bottom row). The images also show several other lesions: (<b>a</b>) Lesion in the splenium of the corpus callosum (approx. 3 mm × 2 mm). Causes for it not being detected may be image noise and slice thickness/slice gaps. (<b>b</b>) Left frontal lesion (approx. 3 mm × 1 mm). Causes may be associated with slice thickness/contrast as well as image noise. (<b>c</b>) Thin right frontal lesion (approx. 7 mm × 1 mm). It was mistaken for cortex in the FLAIR<sub>UF</sub> image. (<b>d</b>) Left temporopolar lesion (approx. 3 mm × 2 mm). It was not recognized as such in the FLAIR<sub>UF</sub> image owing to commonly occurring distortions within this region. Note. Corresponding slices could not be positioned exactly identically for two reasons: Different slice thicknesses including slice gaps (<b>c</b>) and non-parallel slice inclinations (<b>d</b>).</p>
Full article ">Figure 8
<p>Correlations between tseTP<sub>UF</sub>, tseFN<sub>UF</sub>, TP<sub>TSE</sub>, FN<sub>TSE</sub>, and tseTP<sub>GS</sub> lesion counts. Schematic illustration (<b>a</b>) and contingency table (<b>b</b>). Note. TP<sub>GS</sub> = number of true-positive lesions detected using all contrasts available (gold standard); tseTP<sub>GS</sub> = subset of TP<sub>GS</sub> recorded with FLAIR<sub>TSE</sub>; tseTP<sub>UF</sub> = corresponding subset of TP<sub>UF</sub> recorded with FLAIR<sub>TSE</sub>; tseFN<sub>UF</sub> = corresponding subset of FN<sub>UF</sub> recorded with FLAIR<sub>TSE</sub>; TP<sub>TSE</sub> = true-positive lesions detected in the FLAIR<sub>TSE</sub> images; FN<sub>TSE</sub> = false-negative lesions using the FLAIR<sub>TSE</sub> images; further abbreviations as in <a href="#diagnostics-14-01841-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 9
<p>Contingency tables including a subset of those TP<sub>GS</sub> lesions covered by the FLAIR<sub>TSE</sub> sequence (tseTP<sub>GS</sub>). tseTP<sub>UF</sub>, tseFN<sub>UF</sub>, TP<sub>TSE</sub>, FN<sub>TSE</sub>, and tseTP<sub>GS</sub> lesion counts are correlated, and grouped by size, according to <a href="#diagnostics-14-01841-f008" class="html-fig">Figure 8</a>. Note. Abbreviations as in <a href="#diagnostics-14-01841-f008" class="html-fig">Figure 8</a>. <sup>1</sup> One was FN using FLAIR<sub>3Da</sub>; the rest were TP in FLAIR<sub>3Da</sub>. <sup>2</sup> Two were FN using FLAIR<sub>3Da</sub>; the rest were TP in FLAIR<sub>3Da</sub>. <sup>3</sup> Three were FN using FLAIR<sub>3Da</sub>; the rest were TP in FLAIR<sub>3Da</sub>. <sup>4</sup> Four were FN using FLAIR<sub>3Da</sub>; the rest were TP in FLAIR<sub>3Da</sub>. Lesion counts without superscripts were all TP in FLAIR<sub>3Da</sub>.</p>
Full article ">Figure 10
<p>An MS patient received an MRI scan including three T2w FLAIR sequences: FLAIR<sub>UF</sub> (<b>a</b>), FLAIR<sub>TSE</sub> (<b>b</b>), and FLAIR<sub>3D</sub> (<b>c</b>). Five lesions can be seen in each picture, situated in the frontoparietal region.</p>
Full article ">Figure 11
<p>Sensitivity in terms of lesion detection, using FLAIR<sub>TSE</sub> images (white) and corresponding FLAIR<sub>UF</sub> images (gray). Four groups, which represent different lesion diameters, are displayed, respectively. The additional error bars denote the 95% CIs. No significant differences in sensitivity could be detected for any of the lesion groups (<span class="html-italic">p</span> &gt; 0.05). For small lesions, however, the data suggest a lower sensitivity using the FLAIR<sub>UF</sub> images compared to the FLAIR<sub>TSE</sub> images. Results imply that there is a correlation between lesion detectability and size in both cases, though. Note. Abbreviations as in <a href="#diagnostics-14-01841-t007" class="html-table">Table 7</a>.</p>
Full article ">Figure 12
<p>Conspicuity ratings of lesions in the FLAIR<sub>UF</sub> images, categorized by size and location. Lesion conspicuity was significantly superior for large lesions compared to small lesions. Unless for small lesions, there was a significant difference among some brain regions for large lesions. This could be attributed to occipital lesions (superior) and central lesions (inferior). Note. Black = 1; dark gray = 2; gray = 3; light gray = 4; white = 5. Abbreviations and brain regions as in <a href="#diagnostics-14-01841-t011" class="html-table">Table 11</a>. * <span class="html-italic">p</span> &lt; 0.05.</p>
Full article ">Figure 13
<p>SNR and CNR in FLAIR<sub>UF</sub> images (top) and FLAIR<sub>3Da</sub> images (bottom): (<b>a</b>) Inflammatory lesions. Continuous arrow: Large lesion in the right mesencephalon (approx. 8 mm × 6 mm). The SNR appeared reduced in the center of the FLAIR<sub>UF</sub> image, thus decreasing the lesion conspicuity. Dotted arrows: Temporal lesions (right: approx. 5 mm × 4 mm; left: approx. 4 mm × 3 mm). The SNR appeared significantly improved in posterior brain regions in the FLAIR<sub>UF</sub> images, thus equaling lesion conspicuity between the image variants. Note that in FLAIR<sub>3Da</sub>, the left lesion was better visible in the adjacent image slice (not depicted). Arrowhead: Partially imaged, right temporal lesion (approx. 3 mm × 1 mm in the slice image depicted). Comparison of adjacent slice images showed equal lesion conspicuity. (<b>b</b>) Arrow: Left frontal lesion (approx. 2 mm × 1 mm). Excellent lesion conspicuity in the FLAIR<sub>UF</sub> image due to very good SNR and CNR. (<b>c</b>) Infratentorial lesions. Continuous arrow: Large lesion (approx. 9 mm × 6 mm) that was not visible in the FLAIR<sub>UF</sub> image owing to low SNR and CNR. Dotted arrow: Large lesion (approx. 8 mm × 6 mm) that was less visible in the FLAIR<sub>UF</sub> image due to reduced SNR and CNR. Arrowhead: Large lesion (approx. 3 mm × 2 mm) that was better visible in the FLAIR<sub>UF</sub> image. Note that the SNR improves toward the outer regions of the FLAIR<sub>UF</sub> image.</p>
Full article ">Figure 14
<p>Spatial distortion artifacts in the FLAIR<sub>UF</sub> images: (<b>a</b>) Frontal distortion artifact limiting the diagnostic information in that region. (<b>b</b>) Frontobasal distortion artifacts resulting in limited diagnostic information from that region. (<b>c</b>) Temporopolar distortion artifacts limiting the diagnostic information in the vicinity. In contrast, the cerebellar and pontine distortion artifacts do not limit diagnostic information.</p>
Full article ">Figure 15
<p>Infratentorial pulsatile flow artifacts that severely limit diagnostic information in the FLAIR<sub>UF</sub> images and the FLAIR<sub>3Da</sub> images: (<b>a</b>) FLAIR<sub>UF</sub> (left) and FLAIR<sub>3Da</sub> as a reference (right), cerebellum: The continuous arrows depict an inflammatory true-positive lesion (approx. 4 mm × 3 mm). The dotted arrow indicates a pulsation artifact that is prone to being confused with a lesion. It was counted as a small, false-positive lesion (approx. 2 mm × 1 mm). The arrowhead points to a pulsation artifact that is not likely to be confused with a lesion due to its typical location adjacent to the occipital sinus. (<b>b</b>) FLAIR<sub>3Da</sub> (left) and FLAIR<sub>UF</sub> as a reference (right), at the level of the pons: Top images: Typical hyperintense artifact band in the FLAIR<sub>3Da</sub> image; the arrowhead points to an intensely hyperintense spot within the artifact region that is part of the grainy texture of the artifact. Possible lesions within the artifact region would have been masked completely. Middle images: the continuous arrows show a large lesion (approx. 8 mm × 3 mm) that was misinterpreted as part of the pulsation artifact in the FLAIR<sub>3Da</sub> image. Bottom images: The dotted arrow denotes a small, false-positive FLAIR<sub>3Da</sub> lesion (approx. 2 mm × 2 mm) that turned out to be part of the pulsation artifact. Note. Corresponding slices could not be positioned exactly identically owing to different slice thicknesses including slice gaps and non-parallel slice inclinations.</p>
Full article ">Figure 16
<p>Minor artifacts in the FLAIR<sub>UF</sub> images: (<b>a</b>) Supratentorial pulsation artifacts, caused by extracerebral blood vessels: hyperintense (left image) or hypointense (right image). Hyperintense artifacts can usually be distinguished easily from a lesion, owing to its well-defined, sharply demarcated margin in relation to its size. Hyperintense and hypointense artifacts can usually be related to distant blood vessels, shifted along the phase encoding direction at fixed intervals corresponding to k-space sampling patterns (a quarter of the field of view for the protocol used in our study). Hyperintense artifacts are often located directly adjacent to hypointense artifacts in neighboring image slices. (<b>b</b>) Rare chemical shift artifacts due to incomplete fat suppression, in the shape of hyperintense (left image) or hypointense (right image) frontal streaks. (<b>c</b>) Rare residual aliasing, in the shape of a subtle hyperintense right central streak. (<b>d</b>) Rare spike artifacts, appearing in a herringbone pattern. (<b>e</b>) Supratentorial pulsation artifacts, caused by parenchymal blood vessels (singular findings): hyperintense (small FP lesion), insular cortex (left image), and hypointense, anterior limb of internal capsule (right image). They could not be distinguished using the FLAIR<sub>3D</sub> sequence, however, were clearly correlated with contrast-enhanced images. Also, note the ventricular cerebrospinal fluid pulsation artifact in the right image.</p>
Full article ">Figure 17
<p>Infratentorial pulsatile flow artifacts in FLAIR<sub>TSE</sub> (left), contrasted with corresponding FLAIR<sub>UF</sub> slices (right). The artifact regions are marked with dotted arrows. They appear as irregular, streaky bands traversing the cerebellum from the right sigmoid sinus to the left sigmoid sinus: (<b>a</b>) Artifact-induced hyper- and hypointense dots across the cerebellum; the FLAIR<sub>UF</sub> image confirms that there is no actual lesion in this area. (<b>b</b>) Artifact-induced hyper- and hypointense streaks across the cerebellum; the FLAIR<sub>UF</sub> image confirms that there is one actual lesion (continuous arrow) in this area. Note. Corresponding slices could not be positioned exactly identically owing to different slice intervals and non-parallel slice inclinations.</p>
Full article ">Figure 18
<p>Positional dependence of SNR and CNR within the FLAIR<sub>UF</sub> images: (<b>a</b>) Image slice at the level of the basal nuclei. The SNR deteriorates toward the center of the image, whereas the CNR remains relatively good. Note that both the SNR and CNR are excellent in the marginal regions of the cerebral cortex, e.g., in the occipital lobe. (<b>b</b>) Image slice at the level of the pons and the cerebellum. The SNR is substandard around the pons and improves markedly toward the posterior lobe of the cerebellum, reaching an excellent level. The CNR, however, seems slightly substandard throughout this area compared to other brain regions, even in the most posterior parts. This may be associated with the characteristic, fine folium-sulcus texture of the cerebellum (e.g., vermis or posterior lobe), which cannot be distinguished clearly.</p>
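For context, the SNR and CNR figures discussed in such evaluations are typically computed from regions of interest. The following is a generic sketch of one common convention; the ROI coordinates and the synthetic image are placeholders, not data from this study.

```python
import numpy as np

def snr_cnr(img, roi_tissue, roi_reference, roi_noise):
    """ROI-based SNR/CNR estimates (one common convention among several)."""
    sig = img[roi_tissue].mean()        # mean signal in the tissue of interest
    ref = img[roi_reference].mean()     # mean signal in a reference tissue
    noise_sd = img[roi_noise].std()     # noise level from a background ROI
    return sig / noise_sd, abs(sig - ref) / noise_sd

# Placeholder usage on a synthetic image; slice pairs stand in for drawn ROIs.
img = np.random.rand(256, 256) * 10
snr, cnr = snr_cnr(img,
                   roi_tissue=(slice(40, 60), slice(40, 60)),
                   roi_reference=(slice(100, 120), slice(40, 60)),
                   roi_noise=(slice(0, 20), slice(0, 20)))
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```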
Full article ">
22 pages, 31710 KiB  
Article
FMCW Laser Fuze Structure with Multi-Channel Beam Based on 3D Particle Collision Scattering Model under Smoke Interference
by Zhe Guo, Bing Yang, Kaiwei Wu, Yanbin Liang, Shijun Hao and Zhonghua Huang
Sensors 2024, 24(16), 5395; https://doi.org/10.3390/s24165395 - 21 Aug 2024
Viewed by 913
Abstract
In environments with smoke and suspended particles, the accurate detection of targets is one of the main difficulties for frequency-modulated continuous-wave (FMCW) laser fuzes operating in harsh conditions. To weaken and eliminate the significant influence caused by the interaction of different systems in the photon transmission process and the smoke particle environment, it is necessary to increase the amplitude of the target echo signal and thus improve the signal-to-noise ratio (SNR), which enhances the detection performance of the laser fuze against ground targets in smoke. Under these conditions, the transmission of photons in the smoke environment is studied from the perspective of three-dimensional (3D) collisions between photons and smoke particles, and the FMCW laser echo signal is modeled and simulated in Unity3D on the basis of 3D particle collisions. On this basis, a laser fuze structure based on multi-channel beam emission is designed to counter the combined effect of particle features from different systems, and its impact on the target characteristics is studied. Simulation results show that, compared with a single channel, multi-channel laser emission enhances the amplitude of the target echo signal and improves the anti-interference ability against the combined effects of multiple particle features. These conclusions are supported by experiments with a laser prototype emitting four beam channels. This study therefore not only reveals the laser target properties from the 3D particle collision perspective, but also demonstrates the effectiveness of exploiting the target characteristics in the 3D particle collision mode to enhance the detection performance of FMCW laser fuzes in smoke. Full article
(This article belongs to the Section Optical Sensors)
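As background for the FMCW principle underlying this article, the target range follows from the beat frequency produced by mixing the transmitted chirp with the delayed echo. Below is a minimal Python sketch of that relationship on one up-ramp of a triangular modulation; the bandwidth, ramp duration, sampling rate, and target range are assumed values for illustration, not parameters from the article.

```python
import numpy as np

# Assumed FMCW parameters (illustrative only)
B = 150e6        # sweep bandwidth (Hz)
T = 1e-3         # duration of one up-ramp of the triangular modulation (s)
fs = 20e6        # sampling rate (Hz)
c = 3e8          # speed of light (m/s)
R_true = 12.0    # simulated target range (m)

t = np.arange(0, T, 1 / fs)
tau = 2 * R_true / c                 # round-trip delay
f_beat_true = B / T * tau            # beat frequency on the up-ramp

# Mixing the transmitted chirp with the delayed echo yields a sinusoid
# at the beat frequency (constant phase terms omitted for brevity).
beat = np.cos(2 * np.pi * f_beat_true * t)

# Estimate the beat frequency from the FFT peak, then convert back to range.
spec = np.abs(np.fft.rfft(beat))
freqs = np.fft.rfftfreq(len(beat), 1 / fs)
f_beat_est = freqs[np.argmax(spec)]
R_est = c * T * f_beat_est / (2 * B)
print(f"estimated range: {R_est:.2f} m")   # ~12 m, up to FFT bin resolution
```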
Figure 1
<p>Two ways of photon collision with smoke particle based on the sphere: (<b>a</b>) 2D collision ignoring the particle space shape, (<b>b</b>) 3D collision approach existing with the particle space shape.</p>
Full article ">Figure 2
<p>Single scattering angle <math display="inline"><semantics> <msub> <mi>θ</mi> <mn>1</mn> </msub> </semantics></math>, collision angle <math display="inline"><semantics> <msub> <mi>θ</mi> <mrow> <mi>p</mi> <mi>s</mi> </mrow> </msub> </semantics></math>, velocity deflection angle <math display="inline"><semantics> <msub> <mi>θ</mi> <mrow> <mi>v</mi> <mi>r</mi> </mrow> </msub> </semantics></math> and collision parameter <math display="inline"><semantics> <msub> <mi>b</mi> <mrow> <mi>p</mi> <mi>s</mi> </mrow> </msub> </semantics></math> based on rigid-body hard sphere scattering.</p>
Full article ">Figure 3
<p>Maximum velocity deflection solid angle <math display="inline"><semantics> <mo>Ω</mo> </semantics></math>, azimuth <math display="inline"><semantics> <msub> <mi>φ</mi> <mrow> <mi>s</mi> <mi>e</mi> </mrow> </msub> </semantics></math> and collision cross-section for 3D collisions of photons in smoke environments.</p>
Full article ">Figure 4
<p>The variation of velocity vector in the non-centric collision mode.</p>
Full article ">Figure 5
<p>Photon motion process based on three types in smoke environment.</p>
Full article ">Figure 6
<p>FMCW detection principle based on linear FM system with triangular wave.</p>
Full article ">Figure 7
<p>Simulation process of FMCW laser beat signal based on 3D particle dynamic collision.</p>
Full article ">Figure 8
<p>Laser beat signal spectrums with different features of photon transmission: (<b>a</b>) Emission number, (<b>b</b>) divergence angle, (<b>c</b>) receiving field of view.</p>
Full article ">Figure 9
<p>Laser beat signal spectrums with different features of smoke particles: (<b>a</b>) Number, (<b>b</b>) size, (<b>c</b>) height position.</p>
Full article ">Figure 10
<p>The structure and detection process based on multiple channel laser emission.</p>
Full article ">Figure 11
<p>The main transmission process of photon in the single-channel emission structure.</p>
Full article ">Figure 12
<p>Amplitude–frequency characteristics of the beat signal spectrum in the single-channel emission structure: (<b>a</b>) Amplitude–frequency characteristics for a visibility of 12 m, (<b>b</b>) amplitude–frequency characteristics for a visibility of 15 m.</p>
Full article ">Figure 13
<p>The main transmission process of photon in the four-channel emission structure.</p>
Full article ">Figure 14
<p>Amplitude–frequency characteristics of the beat signal spectrum in the four-channel emission structure: (<b>a</b>) Amplitude–frequency characteristics for a visibility of 12 m, (<b>b</b>) amplitude–frequency characteristics for a visibility of 15 m.</p>
Full article ">Figure 15
<p>Amplitude–frequency characteristics of the beat signal spectrum based on the target echo signal components at a visibility of 15 m: (<b>a</b>) Amplitude–frequency characteristics in the single-channel emission structure, (<b>b</b>) amplitude–frequency characteristics in the four-channel emission structure.</p>
Full article ">Figure 16
<p>Spectrum and amplitude ratio of beat signals under the multiple laser structures: (<b>a</b>) Beat signal spectrum at the visibility of 12 m, (<b>b</b>) beat signal spectrum at the visibility of 15 m.</p>
Full article ">Figure 17
<p>Principle of FMCW laser detection system operation.</p>
Full article ">Figure 18
<p>Laser detection system and laser spots.</p>
Full article ">Figure 19
<p>Schematic diagram of laser target characteristic test and set in the smoke.</p>
Full article ">Figure 20
<p>The actual smoke test scene.</p>
Full article ">Figure 21
<p>Generation of smoke environments and diffusion processes of particles.</p>
Full article ">Figure 22
<p>Beat signals and their spectrums based on single- and four-channel laser emission structures at visibility of 3.67 m: (<b>a</b>) Single-channel laser emission structure, (<b>b</b>) four-channel laser emission structure.</p>
Full article ">Figure 23
<p>Beat signals and their spectrums based on single- and four-channel laser emission structures at visibility of 4.47 m: (<b>a</b>) Single-channel laser emission structure, (<b>b</b>) four-channel laser emission structure.</p>
Full article ">Figure 24
<p>Beat signals and their spectrums based on single- and four-channel laser emission structures at visibility of 5.69 m: (<b>a</b>) Single-channel laser emission structure, (<b>b</b>) four-channel laser emission structure.</p>
Full article ">Figure 25
<p>The beat signal and its spectrum in the smoke-free environment.</p>
Full article ">Figure 26
<p>Beat signal spectrum at visibility from 5 m to 16 m in smoke scene.</p>
Full article ">Figure 27
<p>Growth rate of target echo signal amplitude at different visibilities.</p>
Full article ">
19 pages, 3482 KiB  
Article
Power Allocation Scheme for Multi-Static Radar to Stably Track Self-Defense Jammers
by Gangsheng Zhang, Junwei Xie, Haowei Zhang, Weike Feng, Mingjie Liu and Cong Qin
Remote Sens. 2024, 16(15), 2699; https://doi.org/10.3390/rs16152699 - 23 Jul 2024
Viewed by 696
Abstract
Due to suppression jamming by jammers, the signal-to-interference-plus-noise ratio (SINR) during tracking tasks is significantly reduced, which decreases the target detection probability of radar systems and may interrupt the target track. To address this issue, we propose a multi-static radar power allocation algorithm that enhances the detection and tracking performance of multiple radars by optimally allocating power resources among them. First, the echo signal model and measurement model of the multi-static radar are formulated, followed by the derivation of the Bayesian Cramér–Rao lower bound (BCRLB). A multi-objective optimization method is used to establish the objective function for joint tracking and detection, with the weight coefficient adjusted dynamically to balance the tracking and detection performance of the radars. This ensures the reliability and anti-jamming capability of the multi-static radar system. Simulation results indicate that the proposed algorithm prevents interruption of jammer tracking and maintains robust tracking performance. Full article
(This article belongs to the Section Engineering Remote Sensing)
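To make the weighted joint tracking-and-detection objective concrete, here is a toy Python sketch that splits a total power budget across radars by minimizing a convex combination of a tracking-error proxy and a missed-detection proxy. The SINR model, the two proxies, the weight mu, and the SLSQP solver are all assumptions made for this illustration; the article's actual objective is built on the BCRLB and the detection probability.

```python
import numpy as np
from scipy.optimize import minimize

# Toy model: N radars illuminate one jammer; allocate transmit power p_i.
N = 4
gain = np.array([1.0, 0.6, 1.4, 0.9])   # assumed propagation/RCS gains
jam = np.array([2.0, 1.0, 3.0, 1.5])    # assumed jamming-plus-noise levels
P_total = 4.0
mu = 0.5                                # weight between tracking and detection terms

def cost(p):
    sinr = gain * p / jam
    track_err = np.sum(1.0 / (1.0 + sinr))   # proxy for a BCRLB-like tracking error
    miss = np.sum(np.exp(-sinr))             # proxy for a missed-detection penalty
    return mu * track_err + (1.0 - mu) * miss

cons = ({"type": "eq", "fun": lambda p: np.sum(p) - P_total},)  # total-power budget
bnds = [(1e-3, P_total)] * N
res = minimize(cost, x0=np.full(N, P_total / N),
               method="SLSQP", bounds=bnds, constraints=cons)
print("power allocation:", np.round(res.x, 3))
```

Radars with favorable gain-to-jamming ratios receive more power; sweeping mu over the tracking horizon mimics the dynamic weight adjustment described in the abstract.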
Figure 1
<p>The flowchart of the proposed method.</p>
Full article ">Figure 2
<p>The joint configuration of the multi-static radar system and moving jammers.</p>
Full article ">Figure 3
<p>The BCRLB of four algorithms.</p>
Full article ">Figure 4
<p>The detection performance of the UPA algorithm: (<b>a</b>) jammer 1; (<b>b</b>) jammer 2.</p>
Full article ">Figure 5
<p>The detection performance of the T-OPT algorithm: (<b>a</b>) jammer 1; (<b>b</b>) jammer 2.</p>
Full article ">Figure 6
<p>The detection performance of the D-OPT algorithm: (<b>a</b>) jammer 1; (<b>b</b>) jammer 2.</p>
Full article ">Figure 7
<p>The detection performance of the proposed algorithm: (<b>a</b>) jammer 1; (<b>b</b>) jammer 2.</p>
Full article ">Figure 8
<p>The power allocation results of four algorithms: (<b>a</b>) UPA algorithm; (<b>b</b>) T-OPT algorithm; (<b>c</b>) D-OPT algorithm; (<b>d</b>) the proposed algorithm.</p>
Full article ">Figure 9
<p>The second target reflectivity model.</p>
Full article ">Figure 10
<p>The BCRLB of four algorithms in the case of RCS fluctuation.</p>
Full article ">Figure 11
<p>The detection performance of the UPA algorithm in the case of RCS fluctuation: (<b>a</b>) jammer 1; (<b>b</b>) jammer 2.</p>
Full article ">Figure 12
<p>The detection performance of the T-OPT algorithm in the case of RCS fluctuation: (<b>a</b>) jammer 1; (<b>b</b>) jammer 2.</p>
Full article ">Figure 13
<p>The detection performance of the D-OPT algorithm in the case of RCS fluctuation: (<b>a</b>) jammer 1; (<b>b</b>) jammer 2.</p>
Full article ">Figure 14
<p>The detection performance of the proposed algorithm in the case of RCS fluctuation: (<b>a</b>) jammer 1; (<b>b</b>) jammer 2.</p>
Full article ">Figure 15
<p>The power allocation results of four algorithms in the case of RCS fluctuation: (<b>a</b>) UPA algorithm; (<b>b</b>) T-OPT algorithm; (<b>c</b>) D-OPT algorithm; (<b>d</b>) the proposed algorithm.</p>
Full article ">
24 pages, 13925 KiB  
Article
Millimeter-Wave Radar Detection and Localization of a Human in Indoor Complex Environments
by Zhixuan Xing, Penghui Chen, Jun Wang, Yujing Bai, Jinhao Song and Liuyang Tian
Remote Sens. 2024, 16(14), 2572; https://doi.org/10.3390/rs16142572 - 13 Jul 2024
Viewed by 1449
Abstract
Accurately detecting and locating indoor humans with a frequency-modulated continuous-wave radar remains a great challenge. Owing to interference from the indoor environment and complex objects such as green plants, the radar signal may penetrate, reflect, refract, and scatter, so the echo signals contain noise, clutter, and multipath components of different characteristics. Therefore, a method combining comprehensive non-target signal removal with human localization is proposed to estimate the position of a human target. Time-variant clutter is mitigated through time accumulation using point clustering, and ghost targets are reduced by propagation path matching. The experimental results show that the method can locate the real human target with an average error of 0.195 m in multiple complex environments containing green plants, curtains, or furniture using a 77 GHz millimeter-wave radar, and that it outperforms conventional methods. The detection probability is 81.250% when the human is behind a potted plant and 90.286% when beside it. Full article
(This article belongs to the Special Issue State-of-the-Art and Future Developments: Short-Range Radar)
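The abstract's idea of mitigating time-variant clutter by accumulating detections over several frames and clustering them can be sketched as follows. The use of DBSCAN, the synthetic point cloud, and the persistence threshold are illustrative assumptions rather than the article's exact procedure: the key point is that a real target forms a cluster that persists across frames, while time-variant clutter does not.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Accumulate 2D detection points (x, y) over several radar frames; each point
# carries the index of the frame it came from.
rng = np.random.default_rng(0)
frames = []
for k in range(10):
    human = rng.normal(loc=[2.0, 3.0], scale=0.1, size=(5, 2))   # persistent target
    clutter = rng.uniform(low=0.0, high=6.0, size=(8, 2))        # time-variant clutter
    pts = np.vstack([human, clutter])
    frames.append(np.hstack([pts, np.full((len(pts), 1), k)]))
data = np.vstack(frames)

# Cluster spatially over the accumulated cloud; -1 marks unclustered noise.
labels = DBSCAN(eps=0.3, min_samples=5).fit(data[:, :2]).labels_

# Keep only clusters observed in most frames: persistence implies a real target.
kept = []
for lab in set(labels) - {-1}:
    member_frames = np.unique(data[labels == lab, 2])
    if len(member_frames) >= 8:                  # assumed persistence threshold
        kept.append(data[labels == lab, :2].mean(axis=0))
print("persistent target estimates:", kept)
```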
Figure 1
<p>The overall structure design of the human detection and localization system.</p>
Full article ">Figure 2
<p>The signal processing overall system.</p>
Full article ">Figure 3
<p>The overall design for noises, clutter, and multipath false target filtering.</p>
Full article ">Figure 4
<p>Implementation process of peak search.</p>
Full article ">Figure 5
<p>Multipath propagation paths: (<b>a</b>) single reflector; (<b>b</b>) two reflectors.</p>
Full article ">Figure 6
<p>A radar signal propagation schematic.</p>
Full article ">Figure 7
<p>Two measurement conditions: (<b>a</b>) behind a plant; (<b>b</b>) beside a plant.</p>
Full article ">Figure 8
<p>HRRP comparison: (<b>a</b>) HRRP before noise filtering (back); (<b>b</b>) HRRP before noise filtering (side).</p>
Full article ">Figure 9
<p>HRRP Comparison: (<b>a</b>) HRRP after noise filtering (back); (<b>b</b>) HRRP after noise filtering (side).</p>
Full article ">Figure 10
<p>HRRP comparison: (<b>a</b>) RD after 2D-FFT (back); (<b>b</b>) RD after 2D-FFT (side).</p>
Full article ">Figure 11
<p>Detection point distribution near a plant: (<b>a</b>) after classification (back); (<b>b</b>) after classification (side).</p>
Full article ">Figure 12
<p>Target point distribution: (<b>a</b>) targets behind the plant; (<b>b</b>) targets beside the plant.</p>
Full article ">Figure 13
<p>Four measurement scenarios: (<b>a</b>) behind a new plant; (<b>b</b>) beside a new plant; (<b>c</b>) behind two new plants; (<b>d</b>) beside two new plants.</p>
Full article ">Figure 14
<p>HRRP comparison: (<b>a</b>) HRRP after noise filtering for one plant (back); (<b>b</b>) HRRP after noise filtering for one plant (side); (<b>c</b>) HRRP after noise filtering for two plants (back); (<b>d</b>) HRRP after noise filtering for two plants (side).</p>
Figure 14 Cont.">
Full article ">Figure 15
<p>Detection point distribution: (<b>a</b>) after classification for one plant (back); (<b>b</b>) after classification for one plant (side); (<b>c</b>) after classification for two plants (back); (<b>d</b>) after classification for two plants (side).</p>
Full article ">Figure 16
<p>Target point distribution: (<b>a</b>) targets behind one plant; (<b>b</b>) targets beside one plant; (<b>c</b>) targets behind two plants; (<b>d</b>) targets beside two plants.</p>
Full article ">Figure 17
<p>Two measurement scenarios: (<b>a</b>) beside curtains; (<b>b</b>) behind furniture.</p>
Full article ">Figure 18
<p>HRRP comparison after noise filtering: (<b>a</b>) HRRP (beside curtains); (<b>b</b>) HRRP (behind furniture).</p>
Full article ">Figure 19
<p>Detection point distribution of complex environments: (<b>a</b>) after classification (back); (<b>b</b>) after classification (side).</p>
Full article ">Figure 20
<p>Target point distribution: (<b>a</b>) targets beside curtains; (<b>b</b>) targets behind furniture.</p>
Full article ">Figure 21
<p>Average error comparison in different environments.</p>
Full article ">
32 pages, 3781 KiB  
Article
Spatial Simultaneous Functioning-Based Joint Design of Communication and Sensing Systems in Wireless Channels
by Pham Ngoc Luat, Attaphongse Taparugssanagorn, Kamol Kaemarungsi and Chatchamon Phoojaroenchanachai
Appl. Sci. 2024, 14(12), 5319; https://doi.org/10.3390/app14125319 - 20 Jun 2024
Cited by 1 | Viewed by 1252
Abstract
This paper advocates for spatial simultaneous functioning (SSF) over time division multiple access (TDMA) in joint communication and sensing (JCAS) scenarios to improve resource utilization and reduce interference. SSF enables the concurrent operation of communication and sensing systems, enhancing flexibility and efficiency, especially in dynamic environments. The study introduces jointly designed communication and sensing scenarios for single-input single-output (SISO) and multiple-input multiple-output (MIMO) JCAS receivers. A MIMO-JCAS base station (BS) is proposed that simultaneously processes downlink communication signals and echo signals from targets using interference cancellation techniques. We evaluate the communication performance and sensing estimation over both Rayleigh and measured realistic channels. Additionally, a deep neural network (DNN)-based approach for channel estimation and signal detection in JCAS systems is presented. The DNN outperforms the traditional methods on bit error rate (BER) versus signal-to-noise ratio (SNR) curves, leveraging its ability to learn complex patterns autonomously. The DNN's training process fine-tunes performance based on the specific characteristics of the problem, capturing nuanced relationships within the data and adapting to varying SNR conditions for consistently superior performance compared to the traditional approaches. Full article
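As a rough, self-contained illustration of a learned receiver of this kind (not the paper's actual architecture or dataset, which are not reproduced here), a small PyTorch network can jointly learn implicit channel estimation from a known pilot and bit detection over a toy fading channel:

```python
import torch
import torch.nn as nn

N_SYM = 16  # BPSK data symbols per frame (illustrative)

# Map a received frame (1 pilot sample + N_SYM data samples) to bit logits.
model = nn.Sequential(
    nn.Linear(N_SYM + 1, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_SYM),
)

def make_batch(batch, snr_db=10.0):
    bits = torch.randint(0, 2, (batch, N_SYM)).float()
    x = 2 * bits - 1                                  # BPSK mapping
    h = torch.randn(batch, 1)                         # toy flat fading, constant per frame
    sigma = 10 ** (-snr_db / 20)
    pilot = h + sigma * torch.randn(batch, 1)         # known +1 pilot carries the channel
    y = h * x + sigma * torch.randn(batch, N_SYM)     # faded, noisy data samples
    return torch.cat([pilot, y], dim=1), bits

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(500):
    y, bits = make_batch(256)
    loss = loss_fn(model(y), bits)
    opt.zero_grad(); loss.backward(); opt.step()

# Evaluate hard-decision bit error rate on a fresh batch.
with torch.no_grad():
    y, bits = make_batch(4096)
    ber = ((model(y) > 0).float() != bits).float().mean()
print(f"BER at 10 dB SNR: {ber:.4f}")
```

The pilot resolves the sign ambiguity of the fading coefficient, so the network effectively learns channel estimation and detection in one pass, which is the spirit of the DNN receiver evaluated in the article.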
Figure 1
<p>A basic JCAS scenario [<a href="#B2-applsci-14-05319" class="html-bibr">2</a>].</p>
Full article ">Figure 2
<p>JCAS downlink communication scenario.</p>
Full article ">Figure 3
<p>The architecture of the OFDM system with deep learning-based channel estimation and signal detection.</p>
Full article ">Figure 4
<p>Experiment setup for measuring channel.</p>
Full article ">Figure 5
<p>S21, CFR, CIR, and averaged PDP of the channel in the corridor.</p>
Full article ">Figure 6
<p>S21, CFR, CIR, and averaged PDP of the channel in the meeting room.</p>
Full article ">Figure 7
<p>Communication and sensing performance using different numbers of receive antennas: (<b>a</b>) Communication performance, (<b>b</b>) Sensing performance.</p>
Full article ">Figure 8
<p>Communication and sensing performance using different numbers of receive antennas for the realistic channel: (<b>a</b>) Communication performance, (<b>b</b>) Sensing performance.</p>
Full article ">Figure 9
<p>Communication and sensing performance under different velocities of a UE: (<b>a</b>) Communication performance, (<b>b</b>) Sensing performance.</p>
Full article ">Figure 10
<p>Communication performance of SSF D-JCAS and DCAS: (<b>a</b>) number of receive antennas, <math display="inline"><semantics> <msub> <mi>N</mi> <mi>r</mi> </msub> </semantics></math> = 1; (<b>b</b>) number of receive antennas, <math display="inline"><semantics> <msub> <mi>N</mi> <mi>r</mi> </msub> </semantics></math> = 2.</p>
Full article ">Figure 11
<p>Sensing performance of SSF D-JCAS and DCAS: (<b>a</b>) number of receive antennas, <math display="inline"><semantics> <msub> <mi>N</mi> <mi>r</mi> </msub> </semantics></math> = 1; (<b>b</b>) number of receive antennas, <math display="inline"><semantics> <msub> <mi>N</mi> <mi>r</mi> </msub> </semantics></math> = 2.</p>
Full article ">Figure 12
<p>Training progress of DNN regression for 100,000 frames.</p>
Full article ">Figure 13
<p>BERs of DNN regression and MMSE approach for 100,000 frames.</p>
Full article ">Figure 14
<p>Training progress of LSTM method for 10,000 frames.</p>
Full article ">Figure 15
<p>BERs of LSTM method and MMSE approach for 10,000 frames.</p>
Full article ">Figure 16
<p>Range and velocity RMSEs over SNR for the traditional and DNN approaches with range = 30 m and velocity = 30 m/s.</p>
Full article ">Figure 17
<p>Range and velocity RMSEs over SNR for the traditional and DNN approaches with range = 305 m and velocity = 140 m/s.</p>
Full article ">Figure 18
<p>Communication and sensing performance using different approaches with 2 receive antennas.</p>
Full article ">Figure 19
<p>Communication and sensing performance using DL approaches for both SSF D-JCAS and DCAS.</p>
Full article ">
17 pages, 1308 KiB  
Article
Real-Time Three-Dimensional Tracking of Distant Moving Objects Using Non-Imaging Single-Pixel LiDAR
by Zijun Guo, Zixin He, Runbo Jiang, Zhicai Li, Huiling Chen, Yingjian Wang and Dongfeng Shi
Remote Sens. 2024, 16(11), 1924; https://doi.org/10.3390/rs16111924 - 27 May 2024
Cited by 2 | Viewed by 1125
Abstract
The real-time tracking of moving objects has extensive applications in various domains. Existing tracking methods typically utilize video image processing, but their performance is limited due to the high information throughput and computational requirements associated with processing continuous images. Additionally, imaging in certain spectral bands can be costly. This paper proposes a non-imaging real-time three-dimensional tracking technique for distant moving targets using single-pixel LiDAR. This novel approach involves compressing scene information from three-dimensional to one-dimensional space using spatial encoding modulation and then obtaining this information through single-pixel detection. A LiDAR system is constructed based on this method, where the peak position of the detected full-path one-dimensional echo signal is used to obtain the target distance, while the peak intensity is used to obtain the azimuth and pitch information of the moving target. The entire process requires minimal data collection and a low computational load, making it feasible for the real-time three-dimensional tracking of single or multiple moving targets. Outdoor experiments confirmed the efficacy of the proposed technology, achieving a distance accuracy of 0.45 m and an azimuth and pitch angle accuracy of approximately 0.03° in localizing and tracking a flying target at a distance of 3 km. Full article
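A compact numerical sketch of the two-step readout described here, range from the echo peak position and transverse angles from the maxima of projection curves, might look as follows. The waveform, sampling rate, profile shapes, and angular mapping are invented for illustration and are not the system's actual parameters.

```python
import numpy as np

c = 3e8
fs = 1e9                                   # assumed echo digitizer sampling rate (Hz)
t = np.arange(0, 40e-6, 1 / fs)

# Simulated full-path echo: one Gaussian return from a target at 3 km plus noise.
R_true = 3000.0
tau = 2 * R_true / c
echo = np.exp(-((t - tau) ** 2) / (2 * (5e-9) ** 2)) + 0.02 * np.random.randn(len(t))

# Step 1: range from the peak position of the echo waveform.
R_est = c * t[np.argmax(echo)] / 2
print(f"range: {R_est:.1f} m")

# Step 2: azimuth/pitch from the maxima of the projection curves. Here the
# curves recovered from peak intensities under row/column modulation patterns
# are emulated by 1D profiles peaking at the target's pixel.
n = 64
x_profile = np.exp(-((np.arange(n) - 40) ** 2) / 8.0)   # horizontal projection curve
y_profile = np.exp(-((np.arange(n) - 22) ** 2) / 8.0)   # vertical projection curve
fov_deg = 1.0                                           # assumed field of view
az = (np.argmax(x_profile) / (n - 1) - 0.5) * fov_deg
el = (np.argmax(y_profile) / (n - 1) - 0.5) * fov_deg
print(f"azimuth offset: {az:.3f} deg, pitch offset: {el:.3f} deg")
```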
Graphical abstract
Full article ">Figure 1
<p>Structure and optical path diagram of the LiDAR system. (<b>a</b>) The system features three main components: the laser system, a Schmidt–Cassegrain telescope, and a dual-channel optical reception system. (<b>b</b>) Schematic diagram of the system’s optical path. The light emitted by the laser system illuminates the object under measurement, and the returned light, collected by the telescope, is projected onto the DMD. After modulation with the DMD, one path is directed to the APD to obtain the total intensity, while the other path is received and imaged using the CCD.</p>
Full article ">Figure 2
<p>The principle of projection curve measurement. (<b>a</b>) The projection curves of a slice image of the 3D scene. (<b>b</b>) The generation process of modulation patterns, where each row and column of the Hadamard matrix is utilized to construct a modulation pattern. For example, the first row of the Hadamard matrix <span class="html-italic">S<sub>x</sub></span><sub>,1</sub> is used to construct the modulation pattern <span class="html-italic">S</span>′<span class="html-italic"><sub>x</sub></span><sub>,1</sub>(<span class="html-italic">x</span>,<span class="html-italic">y</span>), and the first column <span class="html-italic">S</span><sub>1,<span class="html-italic">y</span></sub> is used to construct the modulation pattern <span class="html-italic">S</span>′<span class="html-italic"><sub>y</sub></span><sub>,1</sub>(<span class="html-italic">x,y</span>). The specific construction method involves setting each row of <span class="html-italic">S</span>′<span class="html-italic"><sub>x</sub></span><sub>,1</sub>(<span class="html-italic">x,y</span>) equal to <span class="html-italic">S<sub>x</sub></span><sub>,1</sub> and each column of <span class="html-italic">S</span>′<span class="html-italic"><sub>y</sub></span><sub>,1</sub>(<span class="html-italic">x</span>,<span class="html-italic">y</span>) equal to <span class="html-italic">S</span><sub>1,<span class="html-italic">y</span></sub>.</p>
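The pattern construction in this caption, every row of one pattern set to a Hadamard row and every column of the companion pattern set to a Hadamard column, can be reproduced in a few lines. The pattern size and single-pixel-target scene below are arbitrary examples; the inverse transform at the end shows why the maxima of the recovered projection curves mark the target position.

```python
import numpy as np
from scipy.linalg import hadamard

n = 8
H = hadamard(n)                                # +1/-1 Sylvester Hadamard matrix

# S'_{x,k}: every row equals the k-th Hadamard row (pattern varies along x);
# S'_{y,k}: every column equals the k-th Hadamard column (varies along y).
patterns_x = [np.tile(H[k, :], (n, 1)) for k in range(n)]
patterns_y = [np.tile(H[:, k][:, None], (1, n)) for k in range(n)]

# Single bright pixel at row 2, column 5 of the scene (illustrative).
scene = np.zeros((n, n))
scene[2, 5] = 1.0

# Single-pixel measurements = total intensity under each modulation pattern.
m_x = np.array([np.sum(p * scene) for p in patterns_x])
m_y = np.array([np.sum(p * scene) for p in patterns_y])

# H is symmetric with H @ H = n*I, so H/n inverts the measurement and
# recovers the x- and y-projection curves of the scene.
proj_x = H @ m_x / n
proj_y = H @ m_y / n
print(np.argmax(proj_x), np.argmax(proj_y))    # 5 2 -> target column and row
```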
Full article ">Figure 3
<p>Overview of the detection and processing process; (<b>a</b>) time waveform of the system-emitted pulse laser; (<b>b</b>) drones distributed at different distances in the scene; (<b>c</b>) echo signal received by the detector, with discrete sampling of the signal. The distance information of the target can be calculated based on the time corresponding to the peak value, and the azimuth and pitch information of the target can be calculated based on the peak value; (<b>d</b>) projection curves of the target scene calculated using the intensity of the peak signal. The maximum point of the projection curve corresponds to the position of the target in the field of view.</p>
Full article ">Figure 4
<p>Schematic diagram of the experimental system. (<b>a</b>) The design of the experimental system, where the LiDAR system detects drones in the field of view. The APD output analog signal is discretely collected by the DAS and transmitted to the computer for processing. The image captured by the CCD camera is then transmitted in real time to the computer for display. (<b>b</b>) The drones used in the experiment, with a drone height of 0.3 m and width of 0.35 m. (<b>c</b>) The placement of the LiDAR system and the area of the drones’ flight, with a building complex behind the drones, represented in the LiDAR detection signal as a larger echo intensity.</p>
Full article ">Figure 5
<p>Distance detection results; (<b>a</b>,<b>b</b>) two flight trajectories, with the <span class="html-italic">x</span>-axis representing time and the <span class="html-italic">y</span>-axis representing distance. The blue dashed line represents the system’s measured distance values, while the red dashed line represents the recorded drone distances.</p>
Full article ">Figure 6
<p>Single-target positioning measurement results; (<b>a</b>,<b>b</b>) two flight trajectories, with the <span class="html-italic">x</span>-axis representing time and the <span class="html-italic">y</span>-axis representing the pitch position of the drone in the field of view. The black dashed line represents the true pitch position obtained from the drone’s flight record, while the red dashed line represents the pitch position measured with the LiDAR system.</p>
Full article ">Figure 7
<p>Three-dimensional tracking results of a single target at 3 km: (<b>a</b>) the target position obtained at a certain moment during the measurement process; (<b>b</b>) image from the CCD at that moment; (<b>c</b>) the approximate flight trajectory of the drone moving in the horizontal, vertical, and longitudinal directions; (<b>d</b>) the continuous three-dimensional drone flight trajectory obtained with the LiDAR system. To make the distance changes more distinguishable with a three-dimensional trajectory, the color of the trajectory lines changes gradually with distance.</p>
Full article ">Figure 8
<p>Results of LiDAR multi-target tracking. (<b>a1</b>–<b>f1</b>) Sequential display of multiple detection frames during the tracking process, showing the three-dimensional information of the drones obtained with the LiDAR system. The <span class="html-italic">x</span>-axis represents the horizontal position of the targets in the field of view using the azimuth angle in degrees, and the <span class="html-italic">y</span>-axis represents the vertical position using the pitch angle in degrees. Distances are marked on the graph, assisted by a color bar. (<b>a2</b>–<b>f2</b>) Sequential display of multiple detection frames during the tracking process, showing the images obtained via the CCD camera, with drone positions highlighted using red boxes.</p>
Full article ">Figure 9
<p>Results of LiDAR system’s multi-target three-dimensional detection. (<b>a</b>–<b>d</b>) The three-dimensional motion trajectories of multiple targets within the field of view. The three axes represent the horizontal position, pitch position, and distance of the targets. The azimuth angle is used for the horizontal position and pitch angle for the pitch position; the distance is in meters.</p>
Full article ">