Search Results (120)

Search Parameters:
Keywords = image defocusing

18 pages, 1981 KiB  
Article
Characterization of Defocused Coherent Imaging Systems with Periodic Objects
by Gianlorenzo Massaro and Milena D’Angelo
Sensors 2024, 24(21), 6885; https://doi.org/10.3390/s24216885 - 26 Oct 2024
Viewed by 770
Abstract
Recent advancements in quantum and quantum-inspired imaging techniques have enabled high-resolution 3D imaging through photon correlations. These techniques exhibit reduced degradation of image resolution for out-of-focus samples compared to conventional methods (i.e., intensity-based incoherent imaging). A key advantage of these correlation-based approaches is their independence from the system numerical aperture (NA). Interestingly, both the improved resolution of defocused images and the NA-independent scaling are linked to the spatial coherence of light. This suggests that correlation measurements, although they exploit spatial coherence, are not essential for achieving this imaging advantage. This discovery has led to the development of optical systems that achieve similar performance by using spatially coherent illumination and relying on intensity measurements: direct 3D imaging with NA-independent resolution was recently demonstrated in a correlation-free setup using LED light. Here, we explore the physics behind the enhanced performance of defocused coherent imaging, showing that it arises from the modification of the sample's spatial harmonic content due to diffraction, unlike the blurring seen in conventional imaging. These results are crucial for understanding the implications of the physical differences between coherent and incoherent imaging, and are expected to pave the way for practical application of the discovered phenomena.
(This article belongs to the Special Issue Imaging and Sensing in Optics and Photonics)
Figures:
Figure 1: General scheme of the imaging device.
Figure 2: Comparison between the incoherent images (left) and coherent images (right) for a defocusing δ (vertical axis) ranging from −7 mm to +7 mm. The plots assume a spatial frequency of ν = 20 lines/mm, illumination at λ = 500 nm, and NA = 0.05.
Figure 3: Comparison between the incoherent images (upper panels) and coherent images (lower panels) for three spatial frequencies ν and three defocusing values δ. The spatial frequencies are ν = 1 cycle/mm (left), 10 cycles/mm (middle), and 20 cycles/mm (right). In each panel, three sample displacements are shown, corresponding to focus (black), +1 mm (red), and +3 mm (blue).
Figure 4: Comparison between the coherent and incoherent MTF as a function of the spatial frequency of the sinusoidal input, for 1 mm defocusing (left panel) and no defocusing (right panel). Solid lines: MTF at the same NA as in Figure 3; dashed lines: a halved NA of 0.025.
Figure 5: Comparison between the DOF of incoherent and coherent imaging for ν = 1 line/mm (right panel) and 10 lines/mm (left panel). The dashed lines identifying the DOF are obtained for a tolerance c = 20%.
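
The coherent-versus-incoherent distinction the abstract describes can be pictured with a minimal 1D sketch, written under textbook Fourier-optics assumptions rather than taken from the paper: the defocused coherent transfer function (CTF) is the pupil with a quadratic phase, while the incoherent OTF is its autocorrelation, so defocus leaves the coherent passband's modulus untouched but attenuates the incoherent MTF. The cutoff and defocus model below are assumptions.

```python
import numpy as np

# Assumed paraxial model: cutoff f_c = NA/lambda, defocus as a quadratic
# pupil phase exp(i*pi*lambda*delta*f^2).
lam, NA, delta = 500e-9, 0.05, 1e-3      # wavelength, aperture, defocus [m]
f_c = NA / lam                           # coherent cutoff [cycles/m]

f = np.linspace(-2 * f_c, 2 * f_c, 4001)
pupil = (np.abs(f) <= f_c).astype(float)
ctf = pupil * np.exp(1j * np.pi * lam * delta * f**2)   # defocused CTF

# Incoherent OTF = normalized autocorrelation of the defocused CTF
# (np.correlate conjugates its second argument).
otf = np.correlate(ctf, ctf, mode="same")
otf /= np.abs(otf).max()

k = np.argmin(np.abs(f - 0.8 * f_c))     # probe at 80% of the cutoff
print("coherent |CTF|:", np.abs(ctf[k])) # stays 1.0 for any delta
print("incoherent MTF:", np.abs(otf[k])) # drops as delta grows
```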
16 pages, 7515 KiB  
Article
Maneuvering Trajectory Synthetic Aperture Radar Processing Based on the Decomposition of Transfer Functions in the Frequency Domain Using Average Blurred Edge Width Assessment
by Chenguang Yang, Duo Wang, Fukun Sun and Kaizhi Wang
Electronics 2024, 13(20), 4100; https://doi.org/10.3390/electronics13204100 - 17 Oct 2024
Viewed by 512
Abstract
With the rapid development of synthetic aperture radar (SAR), delivery platforms are becoming increasingly diversified and miniaturized. The SAR flight process is susceptible to external influences that degrade the imaging results, so imaging processing must be optimized in combination with a SAR image quality assessment (IQA) index. Starting from the principles of SAR imaging, this paper analyzes the defocusing of imaging results caused by mismatched filters and, drawing on assessment algorithms for motion blur, proposes a SAR IQA index based on the average blurred edge width (ABEW) in the salient area. In addition, it proposes decomposing the transfer function in the frequency domain and fitting the matched filter with a polynomial: estimation of the flight trajectory is replaced by a correction of the matched filter, avoiding precise estimation of the Doppler parameters and the complex calculations of time–frequency conversion. The effectiveness of ABEW was verified on SAR images of real scenes, and its scores were highly consistent with the actual image quality. The imaging processing was then tested on echo signals generated with errors introduced during flight; correcting the filter with ABEW as the index yielded clearly more satisfactory imaging results.
(This article belongs to the Special Issue Radar Signal Processing Technology)
Figures:
Figure 1: The impact of azimuth frequency modulation rate error on point targets. The percentages in the subtitles refer to the deviation of ΔK relative to K_a.
Figure 2: The effect of a mismatched filter on imaging results in real-scene SAR echo imaging processing. The deviation of ΔK relative to K_a in (b) is 4%.
Figure 3: Extracting the salient area and BEPs from the original SAR image. (a) Real-scene SAR image; (b) salient area extracted from (a); (c) BEPs extracted from (b).
Figure 4: The azimuth profile of a BEP. Point B is a BEP extracted from the salient area of the SAR image; points A and C are the two endpoints between which the profile is monotonic along the azimuth direction.
Figure 5: The influence of the errors in each coefficient on the quality of the imaging result.
Figure 6: Real-scene SAR images under different frequency modulation rate errors.
Figure 7: Comparison of InEn, variance, and ABEW.
Figure 8: The ABEW value of the processing result when the single-point-target SAR echo is processed with a filter whose coefficients are corrected item by item.
Figure 9: Processing results before and after filter-coefficient correction for single-point-target SAR echo data. (a1–c1) Imaging result, range profile, and azimuth profile with the ideal flight parameters as filter coefficients; (a2–c2) the same with the corrected coefficients.
Figure 10: Processing results before and after filter-coefficient correction for real-scene SAR echo data. (a1,a2) Imaging results with the ideal flight parameters; (b1,b2) imaging results with the corrected coefficients.
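
A minimal sketch of the ABEW idea, with hypothetical function names and the paper's salient-area extraction and BEP selection omitted: for each blurred edge point, walk outward along the azimuth profile while the intensity stays monotonic, take the span between the two endpoints (points A and C in Figure 4) as the edge width, and average over all edge points.

```python
import numpy as np

def blurred_edge_width(profile, j):
    # Width of the monotonic transition containing edge pixel j
    # (assumes 0 < j < len(profile) - 1).
    s = np.sign(profile[j + 1] - profile[j - 1])      # edge polarity
    a = j
    while a > 0 and np.sign(profile[a] - profile[a - 1]) == s:
        a -= 1                                        # endpoint A
    c = j
    while c < len(profile) - 1 and np.sign(profile[c + 1] - profile[c]) == s:
        c += 1                                        # endpoint C
    return c - a

def abew(image, edge_points):
    # Average blurred edge width over salient-area edge points (BEPs);
    # edge_points are (row, col) pairs, profiles taken along azimuth (cols).
    widths = [blurred_edge_width(image[r, :], c) for r, c in edge_points]
    return float(np.mean(widths)) if widths else 0.0
```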
25 pages, 16886 KiB  
Article
A Multiple Targets ISAR Imaging Method with Removal of Micro-Motion Connection Based on Joint Constraints
by Hongxu Li, Qinglang Guo, Zihan Xu, Xinfei Jin, Fulin Su and Xiaodi Li
Remote Sens. 2024, 16(19), 3647; https://doi.org/10.3390/rs16193647 - 29 Sep 2024
Viewed by 706
Abstract
Combining multiple data sources, Digital Earth is an integrated observation platform based on air–space–ground–sea monitoring systems. Among these data sources, inverse synthetic aperture radar (ISAR) is a crucial observation method, typically used to monitor both military and civilian ships thanks to its all-day, all-weather capability. However, in complex scenarios, multiple targets may exist within the same radar antenna beam, and their different motions cause severe defocusing. This paper therefore proposes a multiple-target ISAR imaging method that removes micro-motion connections based on joint constraints. Fully motion-compensated targets exhibit low rank and local similarity in the high-resolution range profile (HRRP) domain, while the micro-motion components are sparse; targets are also sparse in the image domain. Inspired by this, we formulate a novel optimization that promotes the low-rank, Laplacian, and sparsity constraints of the targets and the sparsity of the micro-motion components, solved by the linearized alternating direction method with adaptive penalty (LADMAP). Furthermore, because the different motions of the various targets degrade these inherent characteristics, we integrate a motion compensation transformation into the optimization, thereby separating the rigid bodies and micro-motion components of the different targets. Experiments on simulated data demonstrate the effectiveness of the proposed method.
Figures:
Figure 1: The imaging geometry of multiple targets.
Figure 2: The imaging geometry of the micro-motion components.
Figure 3: Range profiles and imaging results of the rigid body and the micro-motion components. (a) Range profiles of the rigid body; (b) imaging result of the rigid body; (c) range profiles of the micro-motion components; (d) imaging result of the micro-motion components.
Figure 4: Correlation coefficients of the rigid body and the micro-motion components. (a) Correlation coefficients of the rigid body; (b) correlation coefficients of the 256th pulse of the rigid body; (c) correlation coefficients of the micro-motion components; (d) correlation coefficients of the 256th pulse of the micro-motion components.
Figure 5: The procedure of the proposed multi-target separation method.
Figure 6: The scatterer model and antenna model of the targets.
Figure 7: Range profiles and imaging results. (a) Range profiles of the targets; (b) imaging result of the targets.
Figure 8: The micro-motion components and their Radon transform results. (a) Filtered micro-motion components; (b) Radon transform of the filtered components.
Figure 9: Results of the coarse separation. (a) Coarse range profiles of target 1; (b) coarse range profiles of target 2; (c) coarse imaging result of target 1; (d) coarse imaging result of target 2; (e) enlarged view of target 1; (f) enlarged view of target 2.
Figure 10: Aligned range profiles and their enlarged views. (a) Aligned coarse range profiles of target 1; (b) enlarged view of the coarse range profiles of target 1; (c) aligned coarse range profiles of target 2; (d) enlarged view of the coarse range profiles of target 2.
Figure 11: Results of the accurate separation. (a) Accurate range profiles of target 1; (b) accurate range profiles of target 2; (c) accurate imaging result of target 1; (d) accurate imaging result of target 2.
Figure 12: Range profiles of the micro-motion components of the targets and their auto-correlation results. (a) Aligned range profiles of the micro-motion components of target 1; (b) their auto-correlation result; (c) aligned range profiles of the micro-motion components of target 2; (d) their auto-correlation result.
Figure 13: Separation results of different methods. (a) TF-based method; (b) segmentation-based method; (c) parameter-based method; (d) proposed method.
Figure 14: Separation results under different SNR conditions. (a) −5 dB SNR; (b) 0 dB SNR; (c) 5 dB SNR; (d) the separation result using the proposed method.
Figure 15: The curve relating the separation contrast to the initial value of Y_d.
Figure 16: The variation in the value of the cost function.
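
The core decomposition can be pictured with a much-simplified sketch: a toy alternating-proximal scheme, not the paper's LADMAP solver, and without the Laplacian, image-domain, and motion-compensation terms. It splits an HRRP matrix into a low-rank rigid-body part and a sparse micro-motion part; all names and the parameter choices are illustrative.

```python
import numpy as np

def svt(X, tau):
    # Singular-value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Magnitude soft thresholding (safe for complex HRRP data).
    mag = np.abs(X)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * X, 0.0)

def separate(H, lam=None, n_iter=100):
    # H: pulses x range-cells HRRP matrix; returns a low-rank rigid-body
    # part L and a sparse micro-motion part S with H ~= L + S.
    m, n = H.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    S = np.zeros_like(H)
    mu = 1.0 / np.linalg.norm(H, 2)
    for _ in range(n_iter):
        L = svt(H - S, 1.0 / mu)      # low-rank prior on the rigid body
        S = soft(H - L, lam / mu)     # sparsity prior on micro-motion
    return L, S
```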
14 pages, 3649 KiB  
Article
A Rapid Nanofocusing Method for a Deep-Sea Gene Sequencing Microscope Based on Critical Illumination
by Ming Gao, Fengfeng Shu, Wenchao Zhou, Huan Li, Yihui Wu, Yue Wang, Shixun Zhao and Zihan Song
Sensors 2024, 24(15), 5010; https://doi.org/10.3390/s24155010 - 2 Aug 2024
Viewed by 887
Abstract
In the deep-sea environment, the volume available for an in situ gene sequencer is severely limited. In addition, optical imaging systems are subject to real-time, large-scale defocusing caused by ambient temperature fluctuations and vibrational perturbations. To address these challenges, we propose an edge detection algorithm for defocused images based on grayscale gradients, and establish a defocus-state detection model with nanometer resolution that relies on the inherent critical illumination light field. The model has been applied to a prototype deep-sea gene sequencing microscope with a 20× objective. It focuses within a dynamic range of ±40 μm with an accuracy of 200 nm in a single iteration completed within 160 ms. By increasing the number of iterations and exposures, the focusing accuracy can be refined to 78 nm within a dynamic range of ±100 μm in 1.2 s. Notably, unlike conventional photoelectric hill-climbing, this method requires no additional hardware and meets the wide dynamic range, speed, and high-accuracy autofocusing requirements of deep-sea gene sequencing in a compact form factor.
(This article belongs to the Section Optical Sensors)
Figures:
Figure 1: Critical illumination fluorescence microscopy imaging system. CO: collimator; MF: multimode optical fiber with a square output section; De-Speckler: speckle-beam homogenizer.
Figure 2: The principle of critical-illumination focus feedback. (a) Optical path of the infinity-corrected microscope; the red and green lines trace rays at the edge of the field of view. (b) Simulated field distribution at the square laser illumination output end face. (c) Simulated intensity and gradient distributions at the line position in (b). (d) Gradient distributions T1 and T2, and the positional differences of the gradient extrema D1 and D2, before and after displacement of the light field.
Figure 3: Simulation of the defocused critical illumination light field. (a) Conjugate images of the excitation field at five defocused positions. (b) One-dimensional intensity curves along the x-direction of the conjugate images. (c) One-dimensional gradient curves along the x-direction.
Figure 4: Simulated standard evaluation curves for the defocus distance. (a) Normalized curves using the gradient extremum (black) and the gradient-extremum position difference (red) as the defocus evaluation value. (b) Normalized curves using the second-derivative extremum (black) and its position difference (red). (c) Theoretical system magnification under defocus versus the actual magnification obtained from edge recognition. (d) Standard evaluation curves for different objective focal lengths. (e) Standard evaluation curves for different illumination widths.
Figure 5: Prototype of the multichannel fluorescence microscope for gene sequencing. IPC: industrial personal computer. Camera: Andor Zyla 4.2 (Andor Technology, Belfast, UK). Illumination: semiconductor laser (Changchun New Industries MDL-E-655, Changchun, China). Tube lens: Thorlabs TTL100-A (Thorlabs Inc., Newton, NJ, USA). Filter wheel: Thorlabs FW102C. Optical fiber: Changchun New Industries multimode fiber. Beamsplitter: Chroma ZT532/660rpc (MEETOPTICS, Barcelona, Spain). Objective: Olympus UCPLFLN20X (Olympus, Tokyo, Japan), NA 0.7, magnification 20×. Motion module: one-dimensional linear stage (WDI ZAA-STD) with 10 cm travel and 78 nm minimum step. The laser enters from right to left along the arrow and is then directed downward; reflected light propagates from bottom to top, part of it continuing upward through the dichroic film.
Figure 6: Extraction of the standard evaluation curves for the defocus amount. (a) Images of the excitation field captured by the detector at equidistant defocus positions. (b) Standard evaluation curve plotted at 5 μm intervals. (c) Standard evaluation curve plotted at 1 μm intervals.
Figure 7: Focus accuracy test. (a) Single images are captured under different defocus conditions, the defocus amount is calculated, and the deviation from the theoretical value is determined; the red lines mark deviations of +0.1 μm and −0.1 μm. (b) Repeated 78 nm defocus steps during continuous detection, with the defocus amount recorded in real time.
Figure 8: Temperature experiments in a simulated deep-sea environment. (a) Deep-sea gene sequencer. (b) Focal-plane shift due to ambient temperature changes in the pressure-resistant cavity of the deep-sea in situ gene sequencer.
Figure 9: Standard nucleotide fragment sequencing experiment. (a) Grayscale histogram of local imaging on the chip. (b) Extracted intensity histogram. (c) Energy values of the four spectral channels for a single cluster. (d) Q30 quality scores for the overall data.
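
The defocus cue itself is simple to illustrate. Below is a hypothetical sketch (assumed names; the pre-calibrated standard evaluation curve and the iteration logic are omitted) of the grayscale-gradient edge measure: locate the gradient extrema across the conjugate image of the square illumination field and use their magnitudes and positional difference as the defocus evaluation values.

```python
import numpy as np

def defocus_features(image, row):
    # One intensity profile across the conjugate image of the square source.
    profile = image[row].astype(float)
    grad = np.gradient(profile)
    i_rise = int(np.argmax(grad))     # rising edge of the illuminated square
    i_fall = int(np.argmin(grad))     # falling edge
    # Defocus blurs the edges (extrema shrink) and magnifies the field
    # (extrema move apart); both effects serve as evaluation values to be
    # read against a pre-calibrated standard curve to output the defocus.
    return {
        "extremum_rise": grad[i_rise],
        "extremum_fall": grad[i_fall],
        "position_difference": i_fall - i_rise,
    }
```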
19 pages, 16379 KiB  
Article
A Novel Method for CSAR Multi-Focus Image Fusion
by Jinxing Li, Leping Chen, Daoxiang An, Dong Feng and Yongping Song
Remote Sens. 2024, 16(15), 2797; https://doi.org/10.3390/rs16152797 - 30 Jul 2024
Viewed by 570
Abstract
Circular synthetic aperture radar (CSAR) has recently attracted much interest for its excellent performance in civilian and military applications. In CSAR imaging, however, the result defocuses when the height of an object deviates from the reference height. Existing approaches to this problem rely on digital elevation models (DEMs) for error compensation, but collecting a DEM requires specific, costly equipment, while inverting a DEM from the echo is computationally intensive and insufficiently accurate. Inspired by multi-focus image fusion for optical images, a spatial-domain fusion method is proposed based on the sum of modified Laplacian (SML) and a guided filter. After obtaining CSAR images in a stack of different reference heights, an all-in-focus image can be computed by the proposed method. First, the SMLs of all source images are calculated. Second, initial decision maps are acquired by selecting the maximum SML value pixel by pixel. Third, a guided filter is utilized to correct the initial decision maps. Finally, the source images and decision maps are fused to obtain the result. A comparative experiment verifies the exceptional performance of the proposed method, and the processing result of real-measured CSAR data demonstrates that the method is effective and practical.
(This article belongs to the Section Remote Sensing Image Processing)
Figures:
Figure 1: Geometric optics model. Object A focuses on the film plane as point a, while b and c are the blur circles of B and C; b′ and c′ are the film-plane positions at which B and C focus as points.
Figure 2: Energy mapping model in CSAR imaging. Δh1 and Δh2 are the heights of E and F, respectively. The imaging result of D is point d, while e and f are the defocused rings of E and F; e′ and f′ are the image-plane positions at which the imaging results of E and F focus as points.
Figure 3: Imaging results at reference heights of (a) 0 m and (b) 4.5 m. The real-measured data were collected along a trajectory over a road intersection with buildings and factories of different heights; streetlamps stand above the ground along both sides of the roads. The focus quality of the regions marked with red rectangles, whose zoomed-in results are shown in Figure 4, differs markedly.
Figure 4: Zoomed-in results of the regions marked with red rectangles in Figure 3a. (a–d) correspond to rectangles 1–4, respectively; in each group, the left result corresponds to Figure 3a and the other to Figure 3b.
Figure 5: (a) Vertical-direction high-frequency coefficients produced by one-layer NSST in eight directions for Figure 3a. (b) The corresponding high-frequency coefficients for Figure 3b.
Figure 6: (a) An optical image with the foreground focused. (c) An optical image with the background focused. (b,d) Vertical-direction high-frequency coefficients of (a,c), respectively, produced by one-layer NSST in eight directions.
Figure 7: Initial decision maps of the source images in Figure 3; (a,b) correspond to Figure 3a,b.
Figure 8: The real-measured data processing diagram; the detailed process of the proposed image fusion method is shown in Figure 9.
Figure 9: Schematic diagram of the proposed method, consisting of multi-layer imaging and multi-focus image fusion. The NSML box is the SML calculation of the source images, while MAX, GF, and IF represent the focus measure, guided filter, and image fusion, respectively. {I_1, ..., I_N} are the source images at different reference heights, with SMLs {SML_1, ..., SML_N}; IDM, FDM, and I_F are the initial decision maps, the final decision maps, and the fused image.
Figure 10: (a) Schematic of multi-layer imaging, with H the range of reference heights and ΔH the interval; the regions marked with red boxes are in focus. (b) The CSAR imaging geometry.
Figure 11: The circular trajectory along which a dataset was collected over an island whose optical image is shown in Figure 12. V denotes the velocity of the platform; the blue triangle represents the radar beam width.
Figure 12: Optical image of the ROI. There are marinas and buildings of different heights, a road with streetlamps to the north of the island, and a square to the northeast.
Figure 13: Imaging results at reference heights of (a) −1.6 m and (b) 1.4 m. The focus quality of the same objects differs between (a) and (b), especially in the region marked with a blue rectangle; zoomed-in results of the red rectangles labeled 1–3 are shown in Figure 14.
Figure 14: Zoomed-in results of the regions marked with red rectangles in Figure 13a. (a–c) correspond to Regions 1–3: a square with several objects of different heights, a marina lower than the ground, and a tower, respectively. The left images are optical images; the middle results correspond to Figure 13a and the right ones to Figure 13b.
Figure 15: Fusion results of different methods. (a) Proposed method; (b) AG-based method; (c) NSST; (d) PCNN. Zoomed-in results of the marked regions are shown in Figure 16.
Figure 16: Zoomed-in results of the regions marked with red rectangles in Figure 15a. The images from top to bottom correspond to Regions 1–3; vertical images form a group, and (a–d) correspond to the methods in Figure 15.
Figure 17: Fused result of 24 source images at different reference heights, processed by the proposed method. The target marked with a red circle is a point-like target used to analyze performance.
Figure 18: Image slices of the marked target in the multi-layer images, at reference heights of (a) 0 m, (b) 1.4 m, and (c) 2.8 m, and (d) in the fusion result.
Figure 19: The (a) X and (b) Y slices of the marked target.
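
The fusion pipeline is compact enough to sketch directly. This is a minimal NumPy/SciPy rendering of the steps named in the abstract, not the authors' code: the guided-filter refinement of the decision maps is only indicated, and window sizes and names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sml(img, step=1, window=5):
    # Sum of modified Laplacian: second differences along rows and columns,
    # accumulated over a local window (up to a constant factor).
    ml = np.abs(2 * img - np.roll(img, step, 0) - np.roll(img, -step, 0)) \
       + np.abs(2 * img - np.roll(img, step, 1) - np.roll(img, -step, 1))
    return uniform_filter(ml, size=window)

def fuse(stack):
    # stack: (N, H, W) CSAR magnitude images at N reference heights.
    focus = np.stack([sml(im) for im in stack])   # focus measure per layer
    decision = np.argmax(focus, axis=0)           # max-SML rule, pixel-wise
    # The paper then corrects these initial decision maps with a guided
    # filter before fusing; that refinement step is omitted here.
    rows, cols = np.indices(decision.shape)
    return stack[decision, rows, cols]            # all-in-focus image
```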
20 pages, 11907 KiB  
Article
Precise Motion Compensation of Multi-Rotor UAV-Borne SAR Based on Improved PTA
by Yao Cheng, Xiaolan Qiu and Dadi Meng
Remote Sens. 2024, 16(14), 2678; https://doi.org/10.3390/rs16142678 - 22 Jul 2024
Viewed by 735
Abstract
In recent years, with the miniaturization of high-precision position and orientation systems (POS), precise motion errors during SAR data collection can be calculated from high-precision POS data. However, compensating for these errors remains a significant challenge for multi-rotor UAV-borne SAR systems. Compared with large aircraft, multi-rotor UAVs are lighter and slower, with more complex flight trajectories and larger squint angles, which results in significant differences in motion error between building targets and ground targets. If motion compensation is based on ground elevation, the motion error of ground targets is fully compensated, but building targets retain a large residual error; as a result, ground targets are well focused while building targets may be severely defocused. It is therefore necessary to further compensate the residual motion error of building targets using their actual elevation in the SAR image. However, uncompensated errors affect the time–frequency relationship, and the ω-k algorithm further transforms them, so the errors in SAR images become even more complex and difficult to compensate. To solve this problem, this paper proposes a novel improved precise topography- and aperture-dependent (PTA) method that precisely compensates motion errors in the UAV-borne SAR system. After motion compensation and imaging processing based on ground elevation, a secondary focusing is applied to the defocused buildings. The improved PTA fully accounts for the coupling of the residual error with the time–frequency relationship and the ω-k algorithm, and the precise errors in the two-dimensional frequency domain are determined through numerical calculation without any approximations. Simulations and real data processing verify the effectiveness of the method, and the experimental results show that it outperforms the traditional method.
Figures:
Figure 1: Airborne SAR geometric structure diagram.
Figure 2: The relationship between η* and f_η: (a) the curve corresponding to the function g(·); (b) the curve corresponding to the function G(·).
Figure 3: The precise calculation process for the errors in the two-dimensional frequency domain.
Figure 4: The overall flowchart of the OSA + ω-k + improved PTA processing chain.
Figure 5: Residual motion errors within the synthetic aperture.
Figure 6: The two-dimensional frequency domain of the SAR data (a) before and (b) after Stolt interpolation.
Figure 7: The difference between the error model and the simulated signal: (a) Equation (8) versus the simulated signal; (b) Equation (9) versus the simulated signal.
Figure 8: The difference between Equation (13) and the phase of the simulated two-dimensional frequency-domain signal.
Figure 9: Processing results of the various algorithms. (a) After MOCO and ω-k; (b) after compensation of the defocused image with the traditional improved PTA; (c) with the CMBP algorithm; (d) with the improved PTA proposed in this paper.
Figure 10: Photos of the experimental scene and equipment. (a) The Lin-gang business building; (b) the multi-rotor UAV; (c) the Luneburg-lens reflectors.
Figure 11: Imaging results at different elevations. (a) Based on 2 m elevation; (b) based on 60 m elevation.
Figure 12: Processing results of several algorithms. (a) After MOCO and ω-k; (b) traditional improved PTA; (c) CMBP algorithm; (d) improved PTA proposed in this paper.
Figure 13: Impulse responses of the processing results.
Figure 14: Elevation used in post-processing. (a) The elevation data; (b) the elevation data overlaid on the SAR image.
Figure 15: Local image after processing with the proposed method.
Figure 16: Local images after processing with several methods: (a) ω-k; (b) ω-k + traditional improved PTA; (c) ω-k + CMBP; (d) ω-k + proposed algorithm.
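
Why ground-based MOCO leaves buildings defocused can be seen from a toy residual-error model. The geometry, symbols, and numbers below are assumptions for illustration, not the paper's improved PTA (which evaluates the residual numerically in the 2D frequency domain): the antenna position error projects onto the line of sight, and the line of sight of an elevated target differs from the ground one used for compensation.

```python
import numpy as np

lam = 0.03                                    # wavelength [m]
eta = np.linspace(-1.0, 1.0, 1024)            # slow time [s]
v, H = 10.0, 500.0                            # platform speed / height [m]

# Measured 3D antenna deviation from the nominal straight track [m]:
e = np.stack([0.02 * np.sin(2 * np.pi * 0.7 * eta),           # along-track
              0.05 * np.sin(2 * np.pi * 1.3 * eta),           # cross-track
              0.03 * np.cos(2 * np.pi * 0.9 * eta)], axis=1)  # vertical

def los(h_target, y0=300.0):
    # Unit line-of-sight vectors from the nominal track to a target
    # at ground range y0 and elevation h_target.
    d = np.stack([-v * eta,
                  np.full_like(eta, y0),
                  np.full_like(eta, h_target - H)], axis=1)
    return d / np.linalg.norm(d, axis=1, keepdims=True)

# Residual range error of a 60 m building after ground-level (h = 0) MOCO:
# to first order, the position error projected on the LOS difference. The
# secondary focusing must remove the matching azimuth phase term.
residual = np.sum(e * (los(60.0) - los(0.0)), axis=1)
phase_correction = np.exp(-1j * 4 * np.pi * residual / lam)
```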
23 pages, 17222 KiB  
Article
Random Stepped Frequency ISAR 2D Joint Imaging and Autofocusing by Using 2D-AFCIFSBL
by Yiding Wang, Yuanhao Li, Jiongda Song and Guanghui Zhao
Remote Sens. 2024, 16(14), 2521; https://doi.org/10.3390/rs16142521 - 9 Jul 2024
Viewed by 621
Abstract
With the increasingly complex electromagnetic environment faced by radar, random stepped frequency (RSF) has garnered widespread attention owing to its remarkable electronic counter-countermeasure (ECCM) characteristics, and it has been widely applied to inverse synthetic aperture radar (ISAR) in recent years. However, if the phase error induced by the translational motion of the target is not precisely compensated, the RSF ISAR imaging result is defocused. To address this challenge, a novel 2D method based on sparse Bayesian learning, denoted 2D autofocusing complex-valued inverse-free SBL (2D-AFCIFSBL), is proposed to accomplish joint ISAR imaging and autofocusing for RSF ISAR. First, to integrate autofocusing into the imaging process, phase error estimation is incorporated into the imaging model. Then, Bayesian inference is accelerated by relaxing the evidence lower bound (ELBO) to avoid matrix inversion, and the iterative process is further cast in matrix form for computational efficiency. Finally, the 2D phase error is estimated through maximum likelihood estimation (MLE) within the image reconstruction iteration. Experimental results on both simulated and measured datasets substantiate the effectiveness and computational efficiency of the proposed joint 2D imaging and autofocusing method.
(This article belongs to the Topic Computational Intelligence in Remote Sensing: 2nd Edition)
Figures:
Figure 1: ISAR imaging geometry.
Figure 2: Probabilistic graphical model.
Figure 3: (a) Scattering points of the simulated model. (b) ISAR imaging result from the complete echo data.
Figure 4: (a) Random phase error. (b) Linear phase error. (c) Mixed phase error.
Figure 5: ISAR imaging results under (0.8, 0.8) SPR with random, linear, and mixed phase errors.
Figure 6: Image entropy curves for (a) random, (b) linear, and (c) mixed phase error at an SPR of (0.8, 0.8).
Figure 7: ISAR imaging results under (0.6, 0.6) SPR with random, linear, and mixed phase errors.
Figure 8: ISAR imaging results under (0.4, 0.4) SPR with random, linear, and mixed phase errors.
Figure 9: Image entropy curves for (a) random, (b) linear, and (c) mixed phase error at an SPR of (0.6, 0.6).
Figure 10: Image entropy curves for (a) random, (b) linear, and (c) mixed phase error at an SPR of (0.4, 0.4).
Figure 11: ISAR imaging results under 10 dB, 5 dB, and 0 dB SNR with (0.5, 0.5) SPR.
Figure 12: Image entropy curves under (a) 10 dB, (b) 5 dB, and (c) 0 dB SNR.
Figure 13: Quantitative performance comparisons of (a) image entropy and (b) computational time under different SNRs.
Figure 14: (a) Convergence curve of the estimated phase error; curves of the phase-error span versus (b) IE, (c) NMSE, and (d) computational time.
Figure 15: (a) Photograph of the Yak-42 aircraft. (b) ISAR imaging result of the Yak-42 with complete data.
Figure 16: ISAR imaging results under different SPRs with 20 dB SNR.
Figure 17: ISAR imaging results under 20 dB, 10 dB, and 0 dB SNR with (0.7, 0.7) SPR.
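
The MLE autofocus step inside a reconstruction loop admits a compact sketch. The notation below is mine, not the paper's 2D-AFCIFSBL, and only a 1D (per-pulse) phase update is shown: for each pulse, the likelihood-maximizing phase error is the phase of the correlation between the echo predicted from the current scene estimate and the received echo.

```python
import numpy as np

def phase_update(Y, A, X):
    # Y: (M, N) received pulses; A: (M, K) azimuth steering dictionary;
    # X: (K, N) current scene estimate. For y_m = exp(j*phi_m)*(A X)_m + n,
    # the ML estimate is phi_m = angle(p_m^H y_m), p_m the predicted pulse.
    P = A @ X
    corr = np.sum(np.conj(P) * Y, axis=1)        # pulse-wise p^H y
    return np.exp(-1j * np.angle(corr))          # per-pulse correction factor

# One autofocus pass: Y_corrected = phase_update(Y, A, X)[:, None] * Y; the
# paper alternates such MLE steps with inverse-free SBL updates of X.
```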
14 pages, 17218 KiB  
Article
Fast Three-Dimensional Profilometry with Large Depth of Field
by Wei Zhang, Jiongguang Zhu, Yu Han, Manru Zhang and Jiangbo Li
Sensors 2024, 24(13), 4037; https://doi.org/10.3390/s24134037 - 21 Jun 2024
Viewed by 774
Abstract
By applying a high projection rate, the binary defocusing technique can dramatically increase 3D imaging speed. However, existing methods are sensitive to variations in the defocusing degree and have a limited depth of field (DoF). To this end, a time-domain Gaussian fitting method is proposed in this paper. The concept of a time-domain Gaussian curve is first put forward, and the procedure for determining projector coordinates from it is illustrated in detail. A neural network is applied to rapidly compute the peak positions of the time-domain Gaussian curves; relying on its computing power, the proposed method greatly reduces the computing time. The binary defocusing technique can thus be combined with the neural network, achieving fast 3D profilometry with a large depth of field. Moreover, because each time-domain Gaussian curve is extracted from an individual image pixel, it does not deform with a complex surface, so the proposed method also suits measuring complex surfaces. The experimental results demonstrate that the proposed method extends the system DoF by five times, while both the data acquisition time and the computing time are reduced to less than 35 ms.
Figures:
Figure 1: Schematic diagram of determining the projector coordinate.
Figure 2: Principle of computing the peak positions of time-domain Gaussian curves with the neural network.
Figure 3: Schematic diagram of circularly shifting the time-domain Gaussian curves.
Figure 4: Flow chart of computing peak positions, including preprocessing procedures.
Figure 5: Gaussian fringes generated by blurring the multi-line pattern (in one-dimensional space).
Figure 6: The time-domain Gaussian curve is unaffected by a complex object surface. (a) Plaster statue with a complex surface. (b) The surface illuminated with Gaussian fringes (12 multi-line patterns are used to generate the time-domain Gaussian curves). (c) Intensity profile along the white line in (b). (d) Time-domain Gaussian curves extracted from an image pixel.
Figure 7: The influence of a complex surface on the time-domain Gaussian curves.
Figure 8: Projector coordinates calculated from time-domain Gaussian curves (four shifting steps) with the Levenberg–Marquardt algorithm. (a–d) Gaussian fringes with shifting distances of 0, 1, 2, and 3 projector columns projected onto the plaster statue. (e) The 3D reconstruction results.
Figure 9: Computing projector coordinates with the neural network. (a) Input part of the training data. (b) Output part of the training data. (c) Computing result with circular shift. (d) Computing result without circular shift. (e) Fluctuation of the peak positions along the white lines in (c,d). (f–h) 3D reconstruction results of the neural network model with circular shift, the model without circular shift, and the Levenberg–Marquardt algorithm, respectively (step distance: 2 projector columns).
Figure 10: Testing the sensitivity to the defocusing degree with planar targets evenly placed from 0 mm to 750 mm, illuminated with (a) sinusoidal fringes, (b,c) imitated sinusoidal fringes generated with the SBM technique and the dithering technique, respectively, and (d) Gaussian fringes.
Figure 11: Comparison of the sensitivity to the defocusing degree. (a–d) 3D reconstruction results of the sinusoidal pattern, the SBM technique, the dithering technique, and the proposed method. (e) Mean absolute errors at different depths.
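
A minimal per-pixel sketch of the time-domain Gaussian idea, with assumed names and a closed-form fit standing in for the paper's neural network: the intensities a pixel sees across the N shifted, defocus-blurred multi-line patterns trace a Gaussian whose peak position encodes the projector coordinate. The final coordinate mapping is only indicated.

```python
import numpy as np

def gaussian_peak(samples):
    # Sub-frame peak of a Gaussian via a log-domain parabola through the
    # three samples around the maximum.
    k = int(np.argmax(samples))
    idx = np.clip(np.arange(k - 1, k + 2), 0, len(samples) - 1)
    y = np.log(np.maximum(samples[idx], 1e-6))
    denom = y[0] - 2.0 * y[1] + y[2]
    offset = 0.5 * (y[0] - y[2]) / denom if denom != 0 else 0.0
    return k + offset

def projector_coords(frames, line_pitch):
    # frames: (N, H, W) captured images of the N shifted patterns.
    N, H, W = frames.shape
    peaks = np.empty((H, W))
    for r in range(H):
        for c in range(W):
            peaks[r, c] = gaussian_peak(frames[:, r, c])
    # Peak position (in frames) maps linearly to a projector column modulo
    # the line pitch; absolute unwrapping across lines is omitted here.
    return peaks / N * line_pitch
```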
14 pages, 5228 KiB  
Article
Analytical Model of Point Spread Function under Defocused Degradation in Diffraction-Limited Systems: Confluent Hypergeometric Function
by Feijun Song, Qiao Chen, Xiongxin Tang and Fanjiang Xu
Photonics 2024, 11(5), 455; https://doi.org/10.3390/photonics11050455 - 13 May 2024
Viewed by 1210
Abstract
In recent years, optical systems near the diffraction limit have been widely used in high-end applications. An analytical solution of the point spread function (PSF) evidently helps in both understanding and handling the imaging process. This paper analyzes the Fresnel diffraction of diffraction-limited optical systems under defocus. An analytical solution of the defocused PSF is obtained using the series expansion of confluent hypergeometric functions, and the analytical expression of the defocused optical transfer function is also presented for comparison with the PSF. Additionally, characteristic parameters of the PSF are provided, such as the equivalent bandwidth and the Strehl ratio. Comparing the PSF computed by the fast Fourier transform algorithm for an optical system with known, detailed parameters against the analytical solution derived here from only the typical parameters, the root mean square errors of the two methods are below 3% in the weak and medium defocus range. The attractive advantages of this universal model, which is independent of design details, objective types, and applications, are discussed.
(This article belongs to the Special Issue Emerging Topics in High-Power Laser and Light–Matter Interactions)
Figures:
Figure 1: Schematic diagram of the defocused system.
Figure 2: The PSF distribution under different degrees of defocusing.
Figure 3: Sampled values of the PSF curve versus the truncation number N of the series in Equation (19); all terms from n = 0 to N are kept, and terms with n > N are omitted.
Figure 4: OTF curves for different defocus states.
Figure 5: Characteristic parameters: (a) equivalent bandwidth; (b) equivalent linewidth.
Figure 6: Schematic of the two-dimensional structure of the equivalent bandwidth.
Figure 7: The SR as a function of the defocus amount.
Figure 8: OTF versus frequency.
Figure 9: Relationship between the SR and the resolution at OTF = 0.2 for various defocus positions: (a) SR versus resolution; (b) ln(SR) versus resolution.
Figure 10: The 2D system layouts: (a) system 1; (b) system 2; (c) system 3.
Figure 11: Comparison of the analytical solution and ray-tracing results for system 1 at different defocus levels: (a) MTF; (b) PSF.
Figure 12: Comparison of the analytical solution and ray-tracing results for system 2 at different defocus levels: (a) MTF; (b) PSF.
Figure 13: Comparison of the analytical solution and ray-tracing results for system 3 at different defocus levels: (a) MTF; (b) PSF.
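
For reference, the quantity the paper expands in confluent hypergeometric functions is the classical defocused pupil integral of standard diffraction theory. A brute-force quadrature sketch of that integral (not the paper's closed form; normalization conventions are assumed) is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def defocused_psf(v, psi):
    # |U|^2 with U(v) = 2 * int_0^1 exp(i*psi*rho^2) J0(v*rho) rho d(rho),
    # where psi is the defocus parameter and v the normalized radial
    # coordinate in the image plane.
    re = quad(lambda r: np.cos(psi * r**2) * j0(v * r) * r, 0.0, 1.0)[0]
    im = quad(lambda r: np.sin(psi * r**2) * j0(v * r) * r, 0.0, 1.0)[0]
    return abs(2.0 * (re + 1j * im))**2

# The on-axis value versus defocus gives the Strehl ratio directly
# (unity at psi = 0, about 0.405 at psi = pi, zero at psi = 2*pi).
for psi in (0.0, np.pi, 2 * np.pi):
    print(f"psi = {psi:4.2f}  Strehl = {defocused_psf(0.0, psi):.3f}")
```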
21 pages, 5940 KiB  
Article
Sub-Nyquist SAR Imaging and Error Correction Via an Optimization-Based Algorithm
by Wenjiao Chen, Li Zhang, Xiaocen Xing, Xin Wen and Qiuxuan Zhang
Sensors 2024, 24(9), 2840; https://doi.org/10.3390/s24092840 - 29 Apr 2024
Viewed by 779
Abstract
Sub-Nyquist synthetic aperture radar (SAR) based on pseudo-random time–space modulation has been proposed to increase the swath width while preserving the azimuthal resolution. Because of the sub-Nyquist sampling, the scene must be recovered by an optimization-based algorithm. However, existing methods suffer from several issues, e.g., difficult manual tuning, the need to pre-define optimization parameters, and poor resistance to low signal-to-noise ratio (SNR). To address these issues, a reweighted optimization algorithm, named the pseudo-ℒ0-norm optimization algorithm, is proposed for the sub-Nyquist SAR system in this paper. A modified regularization model is first built by applying scene prior information to approximate the number of nonzero elements based on Bayesian estimation, and this model is then solved by the Cauchy–Newton method. Additionally, an error correction method combined with the proposed pseudo-ℒ0-norm optimization algorithm is presented to eliminate defocusing in the motion-induced model. Finally, experiments with simulated signals and strip-map TerraSAR-X images demonstrate the effectiveness and superiority of the proposed algorithm.
(This article belongs to the Special Issue Sensing and Signal Analysis in Synthetic Aperture Radar Systems)
Show Figures

Figure 1. The imaging geometry of the sub-Nyquist SAR. One marker denotes the Nyquist samples; the other denotes the real azimuthal samples chosen uniformly from the Nyquist samples in the sub-Nyquist SAR system based on pseudo-random time–space modulation.
Figure 2. The imaging geometry with the position error. The solid line denotes the realistic track and the dashed line the hypothetical track. R(η, ζ_R) and R_E(η, ζ_R) are the hypothetical and the real slant range with the position error, respectively; ζ_R is the pitch angle, P is the target point, and H is the orbital height. Panel (b) is the projection of target point P onto the XOZ plane; Δx(η) and Δz(η) are the offsets between the realistic and hypothetical positions along the x-axis and z-axis, respectively.
Figure 3. Sea–land interface scene in the SAR image.
Figure 4. Sea containing several boats in the SAR image.
Figure 5. NMSE vs. the iteration number under different algorithms.
Figure 6. The recovered result under different algorithms.
Figure 7. The recovered result based on the pseudo-ℒ0-norm optimization algorithm and the ℒ1-norm optimization algorithm.
Figure 8. The reconstructed results. (a) The original image; (b) the reconstructed scene without error correction; (c) the reconstructed scene with error correction based on the ℒ1-norm optimization algorithm; and (d) the reconstructed scene with error correction based on the pseudo-ℒ0-norm optimization algorithm.
Figure 9. The reconstructed result. (a) The scene reconstruction without error correction; (b) the scene reconstruction with error correction based on the ℒ1-norm optimization algorithm; and (c) the scene reconstruction with error correction based on the pseudo-ℒ0-norm optimization algorithm.
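The pseudo-ℒ0-norm recovery described in this entry is, at heart, a reweighted sparse-recovery loop. The paper's own solver combines Bayesian estimation with a Cauchy–Newton step, neither of which is reproduced here; the sketch below only illustrates the generic reweighting idea with an iteratively reweighted least-squares (IRLS) surrogate, and the measurement matrix, data, and parameters are all invented for the toy demo.

```python
import numpy as np

def irls_pseudo_l0(A, y, lam=1e-2, eps=1e-6, n_iter=50):
    """Generic iteratively reweighted least-squares (IRLS) sparse recovery.

    Each pass solves  min_x ||y - A x||^2 + lam * sum_i w_i |x_i|^2  with
    weights w_i = 1 / (|x_i|^2 + eps), so small entries are penalised hard,
    mimicking an l0-like penalty. This is only the reweighting idea, not
    the paper's Bayesian model or Cauchy-Newton solver.
    """
    x = A.conj().T @ y                      # crude matched-filter initialisation
    AhA = A.conj().T @ A
    Ahy = A.conj().T @ y
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) ** 2 + eps)    # reweighting from the current estimate
        x = np.linalg.solve(AhA + lam * np.diag(w), Ahy)
    return x

# Toy sub-Nyquist demo: fewer measurements (m) than scene cells (n).
rng = np.random.default_rng(0)
n, m, k = 128, 48, 5
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x_true = np.zeros(n, dtype=complex)
x_true[rng.choice(n, k, replace=False)] = 1.0 + 1.0j
y = A @ x_true + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
x_hat = irls_pseudo_l0(A, y)
print("recovered support:", np.sort(np.argsort(np.abs(x_hat))[-k:]))
```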
22 pages, 22814 KiB  
Article
Maritime Moving Target Reconstruction via MBLCFD in Staggered SAR System
by Xin Qi, Yun Zhang, Yicheng Jiang, Zitao Liu, Xinyue Ma and Xuan Liu
Remote Sens. 2024, 16(9), 1550; https://doi.org/10.3390/rs16091550 - 26 Apr 2024
Viewed by 762
Abstract
Imaging maritime targets requires a high resolution and wide swath (HRWS) in a synthetic aperture radar (SAR). When operated with a variable pulse repetition interval (PRI), a staggered SAR can realize HRWS imaging, which needs to be reconstructed due to echo pulse loss [...] Read more.
Imaging maritime targets requires a high resolution and wide swath (HRWS) in a synthetic aperture radar (SAR). When operated with a variable pulse repetition interval (PRI), a staggered SAR can realize HRWS imaging, but the echo must be reconstructed because of pulse loss and nonuniform sampling along the azimuth. The existing reconstruction algorithms are designed for stationary scenes in the staggered SAR mode and thus produce evident image defocusing for moving targets, caused by their complex motion. Typically, the nonuniform sampling and the complex motion of maritime targets aggravate the spectrum aliasing in the staggered SAR mode, causing inevitable ambiguity and degrading the reconstruction performance. To this end, this study analyzed the spectrum of maritime targets in a staggered SAR system through theoretical derivation. A reconstruction method named MBLCFD (Modified Best Linear Unbiased and Complex-Lag Time-Frequency Distribution) is then proposed to refocus the blurred maritime target. First, the signal model of a maritime target with 3D roll–pitch–yaw rotation was established under the curved orbit of the satellite. The best linear unbiased (BLU) method was modified to alleviate the coupling between nonuniform sampling and target motion. A precise SAR algorithm was performed based on the method of inverse reversion to counteract the effects of the curved orbit and wide swath. Based on the hybrid SAR/ISAR technique, the complex-lag time-frequency distribution was exploited to refocus the maritime target images. Simulations and experiments verify the effectiveness of the proposed method, which provides precise refocusing performance in the staggered mode. Full article
Show Figures

Graphical abstract

Figure 1. Geometry of the staggered SAR system for a maritime moving target.
Figure 2. Spectrum analysis in azimuth for the staggered mode. Top: stationary target. Bottom: moving target.
Figure 3. Flowchart of the proposed MBLCFD reconstruction algorithm for maritime moving targets.
Figure 4. The PRI sequence of the fast linear variation strategy. (a) The PRI trend. (b) The location of the blind ranges. (c) The percentage of the lost pulses.
Figure 5. Stop-and-go approximation model phase error versus azimuth time.
Figure 6. Simulated target 1. (a) Defocused staggered SAR image using the original BLU method. (b) Defocused staggered SAR image using the modified BLU method.
Figure 7. Simulated target 2. (a) Defocused staggered SAR image using the original BLU method. (b) Defocused staggered SAR image using the modified BLU method.
Figure 8. Simulated target 1. (a) Zoomed-in defocused SAR image, and the refocused images (b) using the ICBA method, (c) using the SPWVD method, and (d) using the proposed MBLCFD method.
Figure 9. Simulated target 2. (a) Zoomed-in defocused SAR image, and the refocused images (b) using the ICBA method, (c) using the SPWVD method, and (d) using the proposed MBLCFD method.
Figure 10. Simulated target under a low sea state. (a) Zoomed-in defocused SAR image, and the refocused images (b) using the RD algorithm, (c) using ICBA, and (d) using the proposed MBLCFD method.
Figure 11. Refocused staggered SAR image. (a) With a relatively small rectangular window in sub-image selection. (b) With a relatively large rectangular window in sub-image selection.
Figure 12. The entropy versus the SCNR.
Figure 13. Imaging results of the large scene.
Figure 14. Actual target 1 (AT1). (a) The defocused SAR image, and the refocused images using (b) the ICBA method, (c) the SPWVD method, and (d) the proposed method.
Figure 15. Actual target 2 (AT2). (a) The defocused SAR image, and the refocused images using (b) the ICBA method, (c) the SPWVD method, and (d) the proposed method.
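The modified-BLU stage of MBLCFD resamples the nonuniformly (staggered-PRI) sampled azimuth signal onto a uniform grid. The paper's BLU interpolator is not reproduced here; as a stand-in, the sketch below fits a band-limited (sinc-kernel) model to jittered samples by least squares and evaluates it on a uniform grid. The sampling rates, the bandwidth B, and the test signal are all assumed for illustration.

```python
import numpy as np

def resample_to_uniform(t_src, s_src, t_dst, B):
    """Band-limited least-squares resampling from a nonuniform grid.

    Fits s(t) = sum_k c_k * sinc(B (t - t_k)) on the nonuniform samples,
    then evaluates the model on the uniform grid. A crude stand-in for
    the BLU interpolation stage, not the paper's estimator.
    """
    G = np.sinc(B * (t_src[:, None] - t_src[None, :]))   # Gram matrix of kernels
    c = np.linalg.lstsq(G, s_src, rcond=None)[0]
    return np.sinc(B * (t_dst[:, None] - t_src[None, :])) @ c

# Staggered-PRI-style jittered sampling of a band-limited test signal.
rng = np.random.default_rng(1)
N, prf = 64, 100.0                       # samples and nominal PRF (Hz)
B = prf                                  # assumed azimuth bandwidth (Hz)
t_uni = np.arange(N) / prf               # desired uniform grid
t_stag = t_uni + rng.uniform(-0.2, 0.2, N) / prf   # jittered sample times
signal = lambda t: np.cos(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
s_rec = resample_to_uniform(t_stag, signal(t_stag), t_uni, B)
print("max resampling error:", np.max(np.abs(s_rec - signal(t_uni))))
```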
21 pages, 1769 KiB  
Article
Adaptive Resource Scheduling Algorithm for Multi-Target ISAR Imaging in Radar Systems
by Huan Yao, Hao Lou, Dan Wang, Yijun Chen and Ying Luo
Remote Sens. 2024, 16(9), 1496; https://doi.org/10.3390/rs16091496 - 24 Apr 2024
Viewed by 989
Abstract
Inverse synthetic-aperture radar (ISAR) can achieve precise imaging of targets, which enables precise perception of battlefield information, and it has become one of the most important tasks for radar systems. In multi-target scenarios, a resource scheduling method is required to improve the sensing [...] Read more.
Inverse synthetic-aperture radar (ISAR) can achieve precise imaging of targets, which enables precise perception of battlefield information, and it has become one of the most important tasks for radar systems. In multi-target scenarios, a resource scheduling method is required to improve the sensing ability and the overall efficiency of a radar system, given its limited resources. Because a target's motion state changes as the observation distance increases, and because image defocusing can occur when the coherent accumulation time is prolonged and the motion state changes significantly, the optimal observation period should be an important factor in the resource scheduling method, so as to further improve the imaging efficiency of the radar system; existing research has not yet considered this factor. In this paper, we first derive expressions for the target's effective rotation angle and equivalent rotation angular velocity, and then define the target's optimal observation period. Then, for multi-target imaging scenarios, we allocate pulse resources within a given time period based on sparse-aperture ISAR imaging technology. An adaptive radar resource scheduling algorithm for multi-target ISAR imaging is proposed, which prioritizes allocating resources based on the targets' optimal observation periods. In the algorithm, a radar resource scheduling model for multi-target ISAR imaging is established, and a feedback-based closed-loop search optimization method is proposed to solve the model. Finally, the best scheduling strategy is obtained, comprising the imaging task duration and the pulse allocation sequence for each target. Simulation results validate the effectiveness of the algorithm. Full article
(This article belongs to the Special Issue Target Detection, Tracking and Imaging Based on Radar)
Show Figures

Graphical abstract

Figure 1. Derivation diagram of the target's equivalent rotation angle. (a) Target flying in a straight line. (b) Target flying along a curved path.
Figure 2. Feedback structure.
Figure 3. Scattering-point models. (a) Target 4. (b) Target 8.
Figure 4. Angular velocity of the 20 targets over time.
Figure 5. Indicators varying with the iteration number. (a) Three performance indicators varying with the iteration number. (b) Degree of Excellence (DoE) varying with the iteration number. (c) Imaging task duration varying with the iteration number.
Figure 6. Pulse allocation sequences for the 20 targets when the imaging task duration is set to 11 s (a blue line indicates that the pulse at that position is assigned to observe the corresponding target). (a) The first 10 targets. (b) The last 10 targets.
Figure 7. Imaging results. (a) Target 4. (b) Target 8. (c) Target 9. (d) Target 17.
Figure 8. The scheduling and imaging results of the three algorithms (the red lines represent the pulses assigned to Target 8 for observation; they are sparsely distributed). (a) The scheduling result of the proposed algorithm for Target 8. (b) The imaging result of the proposed algorithm for Target 8. (c) The scheduling result of Algorithm 2 for Target 8. (d) The imaging result of Algorithm 2 for Target 8. (e) The scheduling result of Algorithm 3 for Target 8. (f) The imaging result of Algorithm 3 for Target 8.
Figure 9. Imaging task duration varying with the number of targets for the proposed algorithm.
Figure 10. Comparison of performance indicators between different algorithms. (a) PIR varying with the number of targets. (b) SRTS varying with the number of targets. (c) PUR varying with the number of targets.
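The scheduling model above allocates a limited pulse budget across targets while prioritizing each target's optimal observation period. The paper solves this with a feedback-based closed-loop search; the toy sketch below replaces that with a simple greedy rule (serve the eligible target with the largest remaining demand) purely to make the allocation structure concrete. The windows, demands, and horizon are hypothetical.

```python
import numpy as np

def greedy_pulse_allocation(windows, demands, n_slots):
    """Toy pulse scheduler over a common timeline of pulse slots.

    Each slot goes to the eligible target (slot inside its optimal
    observation window, demand not yet met) with the largest remaining
    demand. This greedy rule only illustrates the allocation structure;
    the paper uses a feedback-based closed-loop search instead.
    """
    remaining = list(demands)
    schedule = -np.ones(n_slots, dtype=int)          # -1 marks an idle slot
    for t in range(n_slots):
        eligible = [i for i, (a, b) in enumerate(windows)
                    if a <= t < b and remaining[i] > 0]
        if eligible:
            i = max(eligible, key=lambda j: remaining[j])
            schedule[t] = i
            remaining[i] -= 1
    return schedule, remaining

# Hypothetical optimal observation windows (slot indices) and pulse demands.
windows = [(0, 60), (20, 80), (50, 100)]
demands = [30, 40, 25]
schedule, unmet = greedy_pulse_allocation(windows, demands, n_slots=100)
print("pulses per target:", [int(np.sum(schedule == i)) for i in range(3)])
print("unmet demand:", unmet)
```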
16 pages, 7278 KiB  
Article
Migration through Resolution Cell Correction and Sparse Aperture ISAR Imaging for Maneuvering Target Based on Whale Optimization Algorithm—Fast Iterative Shrinkage Thresholding Algorithm
by Xinrong Guo, Fengkai Liu and Darong Huang
Sensors 2024, 24(7), 2148; https://doi.org/10.3390/s24072148 - 27 Mar 2024
Cited by 1 | Viewed by 851
Abstract
Targets faced by inverse synthetic aperture radar (ISAR) are often non-cooperative, with target maneuvering being the main manifestation of this non-cooperation. Maneuvers cause ISAR imaging results to be severely defocused, which can create huge difficulties in target identification. In addition, as the ISAR [...] Read more.
Targets faced by inverse synthetic aperture radar (ISAR) are often non-cooperative, and target maneuvering is the main manifestation of this non-cooperation. Maneuvers cause ISAR imaging results to be severely defocused, which creates great difficulties in target identification. In addition, as the ISAR bandwidth continues to increase, the impact of migration through resolution cells (MTRC) on the imaging results becomes more significant. Target non-cooperation may also result in a sparse aperture, leading to the failure of traditional ISAR imaging algorithms. Therefore, this paper proposes an algorithm, named the whale optimization algorithm–fast iterative shrinkage thresholding algorithm (WOA-FISTA), that simultaneously realizes MTRC correction and sparse-aperture ISAR imaging for maneuvering targets. In this algorithm, FISTA performs MTRC correction and sparse-aperture ISAR imaging efficiently, and WOA estimates the rotational parameter to eliminate the effects of maneuvering on the imaging results. Experimental results based on simulated and measured datasets prove that the proposed algorithm implements sparse-aperture ISAR imaging and MTRC correction for maneuvering targets simultaneously, and that it achieves better results than traditional algorithms under different signal-to-noise ratio conditions. Full article
(This article belongs to the Special Issue Signal Processing in Radar Systems)
Show Figures

Figure 1. The turntable model of ISAR imaging.
Figure 2. The flowchart of WOA-FISTA.
Figure 3. The shape of the simulated aircraft.
Figure 4. The imaging results of different methods under SNR = 10 dB.
Figure 5. The imaging results of different methods under SNR = −5 dB.
Figure 6. The imaging results of different methods under a pulse sampling rate of 50%.
Figure 7. The imaging results of different methods for the measured dataset.
Figure 8. The imaging results of different methods for low SNR in the measured dataset.
Figure 9. The imaging results of different methods for the measured dataset under a pulse sampling rate of 75%.
Figure 10. The imaging results of different methods for the measured dataset under a pulse sampling rate of 50%.
Figure 11. The imaging results of different methods for the measured dataset under a pulse sampling rate of 25%.
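Within WOA-FISTA, FISTA is the inner solver for the sparse-aperture imaging problem, while WOA searches over the rotational parameter. The sketch below shows only a plain FISTA iteration for min 0.5‖y − Ax‖² + λ‖x‖₁ on a made-up underdetermined complex system; the WOA outer loop and the MTRC-corrected operator are omitted, so this is a minimal sketch of the inner step under those assumptions.

```python
import numpy as np

def fista(A, y, lam=0.05, n_iter=200):
    """Plain FISTA for min_x 0.5 * ||y - A x||_2^2 + lam * ||x||_1.

    In WOA-FISTA this solver would run inside a whale-optimisation search
    over the rotational parameter (and with an MTRC-corrected operator);
    both of those outer ingredients are omitted here.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    soft = lambda u, tau: np.exp(1j * np.angle(u)) * np.maximum(np.abs(u) - tau, 0)
    x = np.zeros(A.shape[1], dtype=complex)
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ z - y)
        x_new = soft(z - grad / L, lam / L)  # proximal (complex soft-threshold) step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Toy sparse-aperture-style problem: underdetermined complex system.
rng = np.random.default_rng(2)
m, n, k = 60, 120, 6
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x0 = np.zeros(n, dtype=complex)
x0[rng.choice(n, k, replace=False)] = 1.0 + 1.0j
y = A @ x0
x_hat = fista(A, y)
print("data residual:", np.linalg.norm(A @ x_hat - y))
```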
23 pages, 44301 KiB  
Article
Synthetic Aperture Ladar Motion Compensation Method Based on Symmetric Triangle Linear Frequency Modulation Continuous Wave Segmented Interference
by Ruihua Shi, Wei Li, Qinghai Dong, Bingnan Wang, Maosheng Xiang and Yinshen Wang
Remote Sens. 2024, 16(5), 793; https://doi.org/10.3390/rs16050793 - 24 Feb 2024
Cited by 1 | Viewed by 923
Abstract
Synthetic Aperture Ladar (SAL) is a sensor that combines laser detection technology with synthetic aperture technology to achieve ultra-high-resolution imaging. Due to its extremely short wavelength, SAL is more sensitive to motion errors. The micrometer-level motion will affect the target’s azimuth focus. This [...] Read more.
Synthetic Aperture Ladar (SAL) is a sensor that combines laser detection technology with synthetic aperture technology to achieve ultra-high-resolution imaging. Due to its extremely short wavelength, SAL is more sensitive to motion errors: even micrometer-level motion affects the target's azimuth focus. This article proposes an SAL motion compensation method based on Symmetric Triangular Linear Frequency Modulation Continuous Wave (STLFMCW) segmented interference, which exploits the characteristics of the triangular waveform to solve the problem of azimuth defocusing. The article first establishes an STLFMCW echo signal model for the SAL system under the influence of motion errors. Second, the radial velocity gradient along the azimuth direction is extracted by segmented interference between the positive- and negative-frequency-modulation segments of the triangular wave. Then, to handle the initial phase-wrapping problem, the frequency-spectrum cross-correlation method is used to accurately estimate the initial radial velocity error. The radial velocity gradient is integrated along the azimuth to obtain the platform motion trajectory. Finally, compensation functions are constructed to complete the echo Range Cell Migration (RCM) correction and residual phase compensation, resulting in a focused SAL image. Simulations and real experiments verify the practical effect of this method in eliminating motion errors using only one period of the STLFMCW signal. The quantitative results show that, compared with the traditional method, the proposed method reduces the azimuth Peak Sidelobe Ratio (PSLR) by 8 dB and the Integrated Sidelobe Ratio (ISLR) by 9 dB. The method thus offers a significant improvement and is of great value for high-resolution FMCW SAL imaging. Full article
(This article belongs to the Special Issue Advances in Synthetic Aperture Radar Data Processing and Application)
Show Figures

Graphical abstract

Figure 1. The basic composition of the FMCW SAL system. ADC: Analog-to-Digital Converter.
Figure 2. The instantaneous frequency of the STLFMCW and dechirp signals.
Figure 3. Process flow of the proposed motion compensation method.
Figure 4. Simulation scene.
Figure 5. Image processing results without motion compensation. (a) Range-compressed image. (b) SAL image.
Figure 6. Comparison between the theoretical and estimated results. (a) Velocity comparison. (b) Motion error comparison. (c) Estimation errors in meters.
Figure 7. Image processing results after motion compensation. (a) Range-compressed image. (b) SAL image.
Figure 8. Contour map of the point targets. (a) No motion compensation. (b) Compensation with the proposed method.
Figure 9. Impulse spreading response. (a) Range profile and (b) azimuth profile of the focused image.
Figure 10. Simulation scene.
Figure 11. Range pulse compression. (a) Original. (b) Cross-correlation. (c) Proposed method.
Figure 12. Imaging results based on (a) no motion compensation, (b) the cross-correlation method, and (c) the proposed method.
Figure 13. ISAL observation geometry model.
Figure 14. The platform of the ISAL system. (a) Photograph of the ISAL system. (b) Imaging scene. (c) Target optical photo.
Figure 15. Extraction of the positive and negative frequency modulation signals of the echo. (a) STFT of the original echo. (b) STFT after circular shift. (c) Positive frequency modulation signal. (d) Negative frequency modulation signal.
Figure 16. The range-compressed images with different correction methods. (a) Original. (b) Cross-correlation. (c) Proposed method.
Figure 17. Comparison of the estimated results. (a) Estimated radial velocity errors. (b) Estimated motion errors.
Figure 18. Imaging results. (a) Original. (b) Cross-correlation. (c) Proposed method.
Figure 19. Comparison of the pulse-response curves in the (a) azimuth and (b) range directions.
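The key trick in the STLFMCW method is that the positive- and negative-slope segments of a triangular FMCW carry the Doppler term with opposite signs, so their conjugate (segmented) interference isolates a phase ramp proportional to the radial velocity. The sketch below demonstrates that single idea on noiseless synthetic beat signals; the wavelength, sweep parameters, and velocity are assumed, and the paper's spectral cross-correlation, RCM correction, and phase compensation steps are not reproduced.

```python
import numpy as np

# Toy dechirped beat signals of one triangular FMCW period: the up-ramp beat
# sits at f_r + f_d and the down-ramp beat at f_r - f_d, so the conjugate
# (segmented) interference of the two leaves a pure tone at 2 * f_d, from
# which the radial velocity follows. All parameter values are assumed.
lam = 1.55e-6                      # 1550 nm ladar wavelength (m)
T_half, fs = 1e-4, 20e6            # half-period (s) and sampling rate (Hz)
t = np.arange(int(T_half * fs)) / fs
f_r, v_true = 2.0e6, 0.8           # range beat frequency (Hz), radial velocity (m/s)
f_d = 2 * v_true / lam             # Doppler shift (Hz)
s_up = np.exp(1j * 2 * np.pi * (f_r + f_d) * t)
s_dn = np.exp(1j * 2 * np.pi * (f_r - f_d) * t)

interf = s_up * np.conj(s_dn)                  # range term cancels, Doppler doubles
phase = np.unwrap(np.angle(interf))
f_d_est = np.polyfit(t, phase, 1)[0] / (2 * np.pi) / 2   # slope is 2*pi*(2*f_d)
print(f"estimated radial velocity: {f_d_est * lam / 2:.3f} m/s (true {v_true})")
```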
14 pages, 3686 KiB  
Article
Fabrication of Micro/Nano Dual Needle Structures with Morphological Gradient Based on Two-Photon Polymerization Laser Direct Writing with Proactive Focus Compensation
by Chenxi Xu, Chen Zhang, Wei Zhao, Yining Liu, Ziyu Li, Zeyu Wang, Baole Lu, Kaige Wang and Jintao Bai
Photonics 2024, 11(2), 187; https://doi.org/10.3390/photonics11020187 - 18 Feb 2024
Viewed by 1308
Abstract
Micro/nano structures with morphological gradients possess unique physical properties and significant applications in various research domains. This study proposes a straightforward and precise method for fabricating micro/nano structures with morphological gradients utilizing single-voxel synchronous control and a nano-piezoelectric translation stage in a two-photon [...] Read more.
Micro/nano structures with morphological gradients possess unique physical properties and have significant applications across research domains. This study proposes a straightforward and precise method for fabricating such structures using single-voxel synchronous control and a nano-piezoelectric translation stage in the two-photon polymerization laser direct writing (TPP-LDW) technique. To address the defocusing issue in large-scale fabrication, a method for dynamic proactive compensation of the laser focus was developed based on fluorescence image analysis, achieving high-precision focus compensation over the entire range of the nano-piezoelectric translation stage. Micro/nano dual needle structures with morphological gradients were then fabricated by varying the writing speed and voxel position. The minimum tip height in the dual needle structure is 80 nm, with a linewidth of 171 nm, and the total length of the dual needle reaches 200 μm. SEM (scanning electron microscope) and AFM (atomic force microscope) characterization shows that the dual needle structures fabricated with the proposed method exhibit high symmetry and nanoscale gradient accuracy. Additionally, the fabrication of hexagonal-lattice periodic structures assembled from morphological-gradient needle structures, and of size-gradient Archimedean spiral structures, validates the capability of the single-voxel fabrication and proactive focus compensation method for complex gradient structures. Full article
Show Figures

Figure 1. Fabrication method for morphological-gradient micro/nano structures. (a) Schematic of the TPP-LDW experimental setup. (b1–b4) Illustration of the MNDNS fabrication process.
Figure 2. Influence of the angle between the objective focal plane and the substrate plane on the fabrication of MNDNS. (a) SEM characterization of the fabricated results. (b) Schematic diagram of the angle's impact on MNDNS.
Figure 3. Procedure of the FIAFPC method. (a) Fluorescence image processing workflow in FIAFPC and the corresponding voxel morphology. (b) Schematic of the fluorescence image acquisition points and the spatial distribution of the objective focal plane (focal plane B) and the substrate (substrate A). (c) Principle of proactive focus compensation using the spatial projection method and the rotation matrix method.
Figure 4. Processing results of periodic linear structures over a 200 μm × 200 μm area based on FIAFPC. (a) Overall SEM image of the fabricated structures (the red dotted lines show the locations of cross-sectional sampling for AFM). (b1–b4) AFM images of the linear structure morphology around specific marks. (c1–c4) Cross-sectional sampling of the linear structures in (b1–b4). FIAFPC: fluorescence-image-analysis-based proactive focus compensation; AFM: atomic force microscope; SH: structure height.
Figure 5. Results of large-scale micro/nano structure fabrication based on FIAFPC. (a) Results of large-scale fabrication. (b) Results of NPS position-gradient fabrication (lifting step size of 0.077 μm). (c) Results of NPS velocity-gradient processing (velocity change step size of 2.5 μm/s). (d) Influence of voxel position and working speed on the fabricated linewidth (WS: working speed).
Figure 6. Analysis of the fabrication results of micro/nano dual needle structures with morphological gradients based on (a) different fabrication speeds (baseline speed 5–25 μm/s) and (b) different NPS heights (z-axis change step 0.016 μm) at a pupil power of P = 7.78 mW (lifting height of the foundation is 0.7 μm). (c) Variations in linewidth at the tips of MNDNS with different fabrication speeds and voxel positions. (d1) SEM image of the smallest MNDNS. (d2,d3) Structural morphology at the two ends of the dual needle. (e1) AFM image of the dual needle structure (detailed view of the needle tip inside the dashed box). (e2) AFM contour analysis of the needle-tip structure. WS: working speed; LW: linewidth; TTLW: taper tip linewidth; VC: vertical coordinates; SH: structure height; HC: horizontal coordinates; L-S: longitudinal section; LF: linear fitting; C-S: cross-section; CF: cubic fit.
Figure 7. Fabrication of controllable gradient structures based on the FIAFPC method. (a) Periodic lattice structure composed of scale-gradient micro/nano-needle arrays (P = 6.03 mW, V = 20 μm/s, relative height change of 0.75 μm at each feature point from the starting point). (b) Local zoomed-in image of (a). (c) Archimedean spiral structure with height and linewidth gradients (P = 4.52 mW, combined speed V = 10 μm/s, overall height change of 0.75 μm). (d) Zoomed-in view of the black dashed box in (c). (e) Detailed view of the starting point of the Archimedean spiral structure.
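The FIAFPC method measures focus offsets at a few stage positions (via fluorescence image analysis) and proactively corrects the writing height across the field; geometrically, this amounts to estimating the tilt between the focal plane and the substrate. The sketch below shows just that geometric core, a least-squares plane fit with hypothetical measured heights, not the paper's image analysis pipeline or rotation-matrix implementation.

```python
import numpy as np

def fit_focus_plane(samples):
    """Least-squares plane z = a*x + b*y + c through measured focus heights.

    Geometric core of proactive focus compensation: a few focus offsets
    measured across the stage define the tilt between the focal plane and
    the substrate. The paper derives these offsets from fluorescence image
    analysis; here the heights are simply given.
    """
    pts = np.asarray(samples, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return a, b, c

def z_compensation(plane, x, y):
    """Proactive z offset keeping the voxel on the tilted substrate plane."""
    a, b, c = plane
    return a * x + b * y + c

# Hypothetical focus heights (um) measured at three corners of a 200 um field.
measured = [(0.0, 0.0, 0.00), (200.0, 0.0, 0.31), (0.0, 200.0, -0.12)]
plane = fit_focus_plane(measured)
print("z offset at field centre (um):", z_compensation(plane, 100.0, 100.0))
```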