Search Results (1,485)

Search Parameters:
Keywords = multi-beam

17 pages, 8166 KiB  
Article
Experimental Research on the Correction of Vortex Light Wavefront Distortion
by Yahang Ge and Xizheng Ke
Photonics 2024, 11(12), 1116; https://doi.org/10.3390/photonics11121116 - 25 Nov 2024
Abstract
Wavefront distortion occurs when vortex beams are transmitted through the atmosphere. Turbulence strongly degrades the transmission of information, so adaptive optical correction technology is needed to correct the wavefront distortion of the vortex beam at the receiving end. In this paper, a method of vortex wavefront distortion correction based on the deep deterministic policy gradient (DDPG) algorithm is proposed; this new correction method can effectively handle high-dimensional state and action spaces and is especially suitable for correction problems in continuous action spaces. The entire system uses adaptive wavefront correction without a wavefront sensor. The simulation results show that the DDPG algorithm can effectively correct distorted vortex beams and improve mode purity: the intensity correlation coefficient of single-mode vortex light can be increased to about 0.88 under weak turbulence and 0.69 under strong turbulence, and that of multi-mode vortex light under weak turbulence can be increased to about 0.96. The experimental results also show that adaptive correction based on the DDPG algorithm can effectively correct the wavefront distortion of vortex light.
(This article belongs to the Section Optical Communication and Network)
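The figure of merit quoted in this abstract, the intensity correlation coefficient between beam intensity patterns, can be sketched as a normalized cross-correlation. A minimal illustration (not the authors' code; the paper's exact definition may differ, and the patterns below are toy data):

```python
import numpy as np

def intensity_correlation(i_corr, i_ideal):
    """Normalized correlation coefficient between two intensity patterns.

    Illustrative only: the paper's exact metric may differ.
    """
    a = i_corr - i_corr.mean()
    b = i_ideal - i_ideal.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Toy example: a Gaussian-like spot versus a modulated ("turbulent") copy.
x = np.linspace(-1, 1, 64)
xx, yy = np.meshgrid(x, x)
ideal = np.exp(-(xx**2 + yy**2) / 0.1)
distorted = ideal * (1 + 0.3 * np.sin(5 * xx))  # toy turbulence modulation
print(intensity_correlation(ideal, ideal))      # 1.0
print(intensity_correlation(distorted, ideal))  # < 1.0
```

A coefficient near 1 indicates the corrected pattern closely matches the ideal one, which is how values like 0.88 or 0.96 in the abstract should be read.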
Figure 1. Adaptive optics system correction schematic diagram.
Figure 2. Schematic diagram of the deformable mirror correction principle.
Figure 3. DDPG algorithm flowchart.
Figure 4. DDPG algorithm calibration schematic.
Figure 5. Flowchart of DDPG-based vortex-beam correction.
Figure 6. Correlation coefficient of single-mode light intensity versus the number of iterations under weak turbulence.
Figure 7. Spot of the single-mode LG beam before and after correction under weak distortion.
Figure 8. Spiral spectrum distribution of a single-mode LG beam before and after correction under weak distortion.
Figure 9. Correlation coefficient of single-mode light intensity versus the number of iterations under strong turbulence.
Figure 10. Spot of the single-mode LG beam before and after correction under strong distortion.
Figure 11. Spiral spectrum distribution of a single-mode LG beam before and after correction under strong distortion.
Figure 12. Correlation coefficient of multi-mode light intensity versus the number of iterations under weak turbulence.
Figure 13. Spot diagrams of the multi-mode LG beam before and after correction under weak distortion.
Figure 14. Spiral spectrum distribution of a multi-mode LG beam before and after correction under weak distortion.
Figure 15. Experimental setup.
Figure 16. Photograph of the deformable mirror.
Figure 17. Light intensity distributions of vortex beams of various orders before and after correction: (a1–a3) before turbulence; (b1–b3) after turbulence; (c1–c3) after correction.
Figure 18. Correlation coefficient of light intensity versus the number of iterations.
Figure 19. Spiral spectrum distributions of vortex beams of different orders before and after correction: (a1–a3) single-mode beams with topological charge l = 1 in weak turbulence; (b1–b3) single-mode beams with l = 1 in strong turbulence; (c1–c3) multi-mode beams with topological charges l = 1, −2 in weak turbulence.
23 pages, 6340 KiB  
Review
A Review of Lidar Technology in China’s Lunar Exploration Program
by Genghua Huang and Weiming Xu
Remote Sens. 2024, 16(23), 4354; https://doi.org/10.3390/rs16234354 - 22 Nov 2024
Viewed by 330
Abstract
Lidar technology plays a pivotal role in lunar exploration, particularly in terrain mapping, 3D topographic surveying, and velocity measurement, which are crucial for guidance, navigation, and control. This paper reviews the current global research and applications of lidar technology in lunar missions, noting that existing efforts are primarily focused on 3D terrain mapping and velocity measurement. The paper also discusses the detailed system design and key results of the laser altimeter, laser ranging sensor, laser 3D imaging sensor, and laser velocity sensor used in the Chang'E lunar missions. By comparing and analyzing similar foreign technologies, this paper identifies future development directions for lunar laser payloads. The evolution towards multi-beam single-photon detection technology aims to enhance the point cloud density and detection efficiency. This manuscript advocates that China actively advance new technologies and conduct space application research in areas such as multi-beam single-photon 3D terrain mapping, lunar surface water ice measurement, and material composition analysis, to elevate the use of laser payloads in lunar and space exploration.
(This article belongs to the Special Issue Laser and Optical Remote Sensing for Planetary Exploration)
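The altimetry principle underlying these laser payloads reduces to converting a round-trip pulse time into range. A toy sketch under idealized assumptions (vacuum, nadir-pointing, single return); the function name and example numbers are illustrative, not from the review:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(round_trip_s: float) -> float:
    """One-way range from a laser pulse round-trip time (idealized)."""
    return C * round_trip_s / 2.0

# A pulse returning after ~667 microseconds corresponds to roughly 100 km,
# the order of magnitude of a lunar orbiter's altitude.
print(range_from_tof(667e-6))  # ≈ 99981 m
```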
Figure 1. Photo and diagram of the laser altimeter [5].
Figure 2. Diagram of the target return signal simulator [6].
Figure 3. DEM models generated from Chang'E-1 laser altimeter elevation data [14]: (a) DEM of the entire lunar surface; (b) DEM of the South Pole region (60°S–90°S); (c) DEM of the North Pole region (60°N–90°N).
Figure 4. Diagram of the L3DIS [19,20]; the red and green arrows represent the emitted and returned laser beams.
Figure 5. Schematic diagram of the double galvanometers: (a) dual mirrors with rotation axes; (b) 16 beam spots in the far field [19,20].
Figure 6. Geometric model of the L3DIS [21].
Figure 7. Measurement results of the L3DIS during the Chang'E-3 landing [21]; white circles mark low-lying areas and black outlines mark craters.
Figure 8. Diagram of the landing trajectory. The velocity is fed into the GNC system at an altitude of about 2–3 km [38].
Figure 9. Directions of the three channels. Channel 1 (CH1) points toward the nadir [38].
Figure 10. System diagrams [38]: (a) modulated system; (b) non-modulated system. The three red circles represent fiber-optic circulators (ports numbered 1, 2, and 3), the connecting curves represent optical fibers, black arrows indicate the outgoing laser, and red arrows indicate the returning signals.
Figure 11. Modulated waveform.
Figure 12. Comparison of the Doppler lidar with the POS at an altitude of 4.2 km: (a) 6000 data points; (b) local view of points 3600–3900 [38].
Figure 13. STMD invested in the development of the NDL through four programs over several years (https://www.nasa.gov/directorates/stmd/impact-story-navigation-doppler-lidar/, accessed on 24 March 2023).
16 pages, 1589 KiB  
Article
A Two-Phase Deep Learning Approach to Link Quality Estimation for Multiple-Beam Transmission
by Mun-Suk Kim
Electronics 2024, 13(22), 4561; https://doi.org/10.3390/electronics13224561 - 20 Nov 2024
Viewed by 288
Abstract
In the multi-user multiple-input-multiple-output (MU-MIMO) beamforming (BF) training defined by the 802.11ay standard, since a single initiator transmits a significant number of action frames to multiple responders, inefficient configuration of the transmit antenna arrays when sending these action frames increases the signaling and latency overheads of MU-MIMO BF training. To configure appropriate transmit antenna arrays for transmitting action frames, the initiator needs to accurately estimate the signal-to-noise ratios (SNRs) measured at the responders for each configuration of the transmit antenna arrays. In this paper, we propose a two-phase deep learning approach to improve the accuracy of SNR estimation for multiple concurrent beams by reducing the measurement errors of the SNRs for individual single beams when each action frame is transmitted through multiple concurrent beams. Through simulations, we demonstrate that our proposed scheme enables more responders to successfully receive action frames during MU-MIMO BF training compared to existing schemes.
(This article belongs to the Special Issue Digital Signal Processing and Wireless Communication)
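As a point of reference for the estimation task, per-beam SNRs can be combined naively in the linear power domain. This baseline is not the paper's learned two-phase model; the function and its noise-floor parameter are illustrative assumptions:

```python
import math

def combined_snr_db(per_beam_snr_db, noise_floor_db=0.0):
    """Naive multi-beam SNR estimate: sum per-beam signal powers in linear
    scale relative to a common noise floor.

    A phase-insensitive baseline only; the paper's point is that such
    estimates carry per-beam measurement errors that a learned model
    can reduce.
    """
    total = sum(10 ** ((s - noise_floor_db) / 10) for s in per_beam_snr_db)
    return noise_floor_db + 10 * math.log10(total)

# Two equal 10 dB beams combine, in this idealized power sum, to ~13 dB.
print(round(combined_snr_db([10.0, 10.0]), 2))  # 13.01
```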
Figure 1. The (a) SISO phase and (b) MIMO phase of MU-MIMO BF training specified in the 802.11ay standard.
Figure 2. Control-mode transmitter block diagram for the transmission of action frames during the MIMO phase of MU-MIMO BF training.
Figure 3. Overall procedure of our TDMT scheme.
Figure 4. Implementation of the simulation programs with MATLAB and Python 3.10 [27].
Figure 5. Visualization of our evaluation scenarios.
Figure 6. Duration of the MIMO phase in the (a) lecture room and (b) L-shaped room scenarios.
Figure 7. Probability of an STA being unable to transmit the BF feedback frame in the (a) lecture room and (b) L-shaped room scenarios.
Figure 8. Root mean squared error of the SNRs measured during the SISO phase in the (a) lecture room and (b) L-shaped room scenarios.
Figure 9. Root mean squared error between the SNRs estimated during the MIMO phase and the actual measured SNRs in the two propagation scenarios.
16 pages, 10376 KiB  
Article
Machine Vision-Based Real-Time Monitoring of Bridge Incremental Launching Method
by Haibo Xie, Qianyu Liao, Lei Liao and Yanghang Qiu
Sensors 2024, 24(22), 7385; https://doi.org/10.3390/s24227385 - 20 Nov 2024
Viewed by 306
Abstract
With the wide application of the incremental launching method in bridge construction, a need has emerged for real-time monitoring of launching displacement during construction. In this paper, we propose a machine vision-based real-time monitoring method for the forward displacement and lateral offset of bridge incremental launching in cases where the bottom surface of the girder is straight. The method uses a cross-shaped target, and achieves efficient detection, recognition, and tracking of multiple targets during the dynamic launching process by training a YOLOv5 target detection model and a DeepSORT multi-target tracking model. Then, based on convex hull detection and the K-means clustering algorithm, the pixel coordinates of the center point of each target are calculated, and the position change of the girder is monitored from the change in the targets' center-point coordinates. The feasibility and effectiveness of the proposed method are verified through laboratory simulation tests and on-site real-bridge testing, comparing its accuracy against a total station.
(This article belongs to the Section Intelligent Sensors)
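The final step of the center-point solution, fitting a straight line to each clustered vertex group and intersecting them, can be sketched as follows. The point sets and names are hypothetical stand-ins for the outputs of the convex hull and K-means stages described in the abstract:

```python
import numpy as np

def line_intersection(pts1, pts2):
    """Fit y = m*x + b to two point sets and return their intersection.

    Sketch of the 'straight-line fitting for intersection' step; the
    paper's pipeline obtains pts1/pts2 from convex-hull vertex clustering.
    """
    m1, b1 = np.polyfit(pts1[:, 0], pts1[:, 1], 1)
    m2, b2 = np.polyfit(pts2[:, 0], pts2[:, 1], 1)
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

# Toy example: the two arms of a cross target crossing at (2, 3).
xs = np.linspace(0, 4, 20)
arm1 = np.column_stack([xs, 1.0 * xs + 1.0])   # y = x + 1
arm2 = np.column_stack([xs, -1.0 * xs + 5.0])  # y = -x + 5
print(line_intersection(arm1, arm2))  # ≈ (2.0, 3.0)
```

Tracking this intersection point frame by frame gives the pixel-level displacement that is then converted to forward and lateral motion.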
Figure 1. (a) Launching of Iowa River Bridge Pier 4; (b) launching of the first girder pair for the Park Bridge.
Figure 2. (a) Camera linear model; (b) image coordinates to pixel coordinates (3D–2D).
Figure 3. Camera calibration with MATLAB 2024.
Figure 4. Cross target.
Figure 5. Schematic diagram of the visual measurement system.
Figure 6. Derivation of the initial position relation of the target.
Figure 7. Network structure of YOLOv5s.
Figure 8. Examples of data augmentation: (a) Cutout; (b) synthetic fog; (c) luminance; (d) motion blur.
Figure 9. Localization results of the YOLOv5 target detection model.
Figure 10. Cross-target tracking results.
Figure 11. Center-point calculation process.
Figure 12. Step-by-step results of the center-point solution: (a) cross target; (b) binary image; (c) edge contours; (d) convex hull detection; (e) convex hull vertex clustering; (f) Region 1; (g) Region 2; (h) straight-line fitting for the intersection.
Figure 13. Plank simulation test setup.
Figure 14. Schematic diagram of the lateral offset and forward displacement of the measured point.
Figure 15. Comparison of displacement measurements at measurement point 1: (a) lateral offset; (b) incremental launching forward displacement.
Figure 16. Visual displacement measurement results.
Figure 17. Lixizhou Bridge.
Figure 18. Field test layout.
Figure 19. Comparison of incremental launching displacement measurements: (a) lateral offset; (b) forward displacement.
Figure 20. Visual measurement results over a time period: (a) forward displacement; (b) lateral offset.
38 pages, 8036 KiB  
Review
Overview of High-Performance Timing and Position-Sensitive MCP Detectors Utilizing Secondary Electron Emission for Mass Measurements of Exotic Nuclei at Nuclear Physics Facilities
by Zhuang Ge
Sensors 2024, 24(22), 7261; https://doi.org/10.3390/s24227261 - 13 Nov 2024
Viewed by 556
Abstract
Timing and/or position-sensitive MCP detectors, which detect secondary electrons (SEs) emitted from a conversion foil during ion passage, are widely utilized in nuclear physics and nuclear astrophysics experiments. This review covers high-performance timing and/or position-sensitive MCP detectors that use SE emission for mass measurements of exotic nuclei at nuclear physics facilities, along with their applications in new measurement schemes. The design, principles, performance, and applications of these detectors with different arrangements of electromagnetic fields are summarized. To achieve high precision and accuracy in mass measurements of exotic nuclei using time-of-flight (TOF) and/or position (imaging) measurement methods, such as high-resolution beam-line magnetic-rigidity time-of-flight (Bρ-TOF) and in-ring isochronous mass spectrometry (IMS), foil-MCP detectors with high position and timing resolution have been introduced and simulated. Beyond TOF mass measurements, these new detector systems are also described for use in heavy ion beam trajectory monitoring and momentum measurements for both beam-line and in-ring applications. Additionally, the use of position-sensitive timing foil-MCP detectors for Penning trap mass spectrometers and multi-reflection time-of-flight (MR-TOF) mass spectrometers is proposed and discussed to improve efficiency and enhance precision.
(This article belongs to the Special Issue Particle Detector R&D: Design, Characterization and Applications)
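Bρ-TOF mass measurement rests on the relativistic relation Bρ = (m/q)·γ·β·c, with β obtained from the flight path and the measured time of flight. A numerical sketch with illustrative values (the function and numbers are not from the review):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def mass_over_q(brho_tm: float, path_m: float, tof_s: float) -> float:
    """m/q in kg/C from magnetic rigidity (T·m), flight path, and TOF.

    Uses Brho = (m/q) * gamma * beta * c; idealized straight-path sketch.
    """
    beta = (path_m / tof_s) / C
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return brho_tm / (gamma * beta * C)

# Round-trip check: a particle at beta = 0.6 over a 100 m flight path.
beta = 0.6
tof = 100.0 / (beta * C)
mq = 1.044e-8  # roughly a proton's m/q in kg/C
brho = mq * (1.0 / math.sqrt(1.0 - beta**2)) * beta * C
print(mass_over_q(brho, 100.0, tof))  # recovers ~1.044e-8
```

The timing resolution of the foil-MCP detectors enters directly through tof_s, which is why sub-nanosecond timing is emphasized throughout the review.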
Figure 1. Schematic overview of the foil-MCP detectors: (a) mirror-type electrostatic foil-MCP detector; (b) direct-projection electrostatic foil-MCP detector; (c) electrostatic-lens foil-MCP detector; (d) foil-MCP detector with parallel magnetic and electrostatic fields; (e) foil-MCP detector with crossed magnetic and electrostatic fields. The SE trajectories are from SIMION simulations.
Figure 2. Working principle of foil-MCP detectors. (a) Trajectories of SEs from the conversion foil to the MCP detector for the electrostatic mirror detector [36]. (b) Principle of a B‖E-MCP detector; the SE trajectories in a magnetic field that changes gradually from a strong field to a weaker uniform field, modified from [102]. (c) A cross-type B×E-MCP detector: heavy ions travel along the positive z-axis, the electric field is oriented along the negative z-axis, and the magnetic field is reversed along the positive y-axis [103].
Figure 3. (a) Five-point imaging of SEs from the foil onto the MCP surface in the X-Z and X-Y views in simulation. Comparison of (b) X-coordinate position, (c) Y-coordinate position, and (d) timing resolutions for detectors with different dimensions (120 mm × 120 mm and 240 mm × 240 mm for the triangular structure). The HV settings for the different plates were identical, and the accelerating HV values were all negative in the simulation [36].
Figure 4. (Upper panel) Simulation of SE trajectories in an electrostatic-lens MCP detector. (Lower panel) Comparison of the (a) Y-direction and (b) Z-direction position resolutions and (c) timing resolution as a function of position at the foil, for the timing (TOF) side and the position-sensitive side [38].
Figure 5. (a) The B×E foil-MCP detector designed to provide both timing and one-dimensional position sensitivity [37]. (b) Position data (x/mm) for ions on each revolution, recorded by a position-sensitive detector with energy loss (a foil-MCP detector) in the storage ring [103]; the error bars (in blue) assume a resolving power of 1 mm (σ) for the foil-MCP detector, and the function x = x0 + a·exp(−b·T)·sin(2π·(T − T0)/ω) + c·T is used to fit (in red) the betatron oscillation (see Section 5.3 for details). (c) Position and angle data (x, x′) for ions on each revolution, captured by a position-sensitive detector that functions without degradation (for example, a Schottky pickup) in a storage ring. (d) Position and angle data (x, x′) for ions per revolution, obtained by a position-sensitive detector with energy loss (a foil-MCP detector) in the storage ring [103]. The simulated ions are 38K19+ at about 200 MeV/nucleon. The simulation is based on the COSY [106] and MOCADI [107] software packages, developed at MSU and GSI.
Figure 6. Simulation of SE trajectories (a) without and (b) with a magnetic field [85]. The SEs are emitted from the conversion foil of the detector; an additional magnetic field significantly improves the confinement of the position.
Figure 7. (a) Schematic cross-sectional view of the setup for calibration of the DLD system. (b) The 3D imaging principle of the calibration setup [36].
Figure 8. (a) CAD drawing of the effective area featuring ϕ0.5 mm or ϕ1 mm holes on the calibration mask. (b) Two-dimensional spectrum of raw signal imaging, displayed as a contour plot, based on the time differences in the x- and y-directions. (c) Calibrated two-dimensional position imaging spectrum of collimated ions passing through the mask, shown as a contour plot [36].
Figure 9. Fitting-residual dependence on position for (a) first-order calibration and (b) third-order calibration with cross terms [36].
Figure 10. Vector field map of the correction vector Vij, derived from the difference between the expected and the calibrated/measured mean values for each hole-center spot: (a) first-order correction map; (b) higher-order correction map. For visibility, the vector magnitudes are enlarged by a factor of 5. The dashed blue circle indicates the edge of the active area of the DLD [36].
Figure 11. Schematic view of the detector arrangement for the (a) offline experiment with an α source and (b) online experiment with heavy-ion beams [36].
Figure 12. (a) Overall detection efficiency of the electrostatic MCP detector versus deflection potential for 84Kr36+ ions; the ratio of deflection to accelerating potential is kept at approximately 0.79. (b) Detection efficiency for α particles emitted from a 241Am source; the accelerating potential is held constant at −6000 V while the deflection potential is varied [36].
Figure 13. (a) Measured beam position distribution from the electrostatic MCP detector. (b) Local detection efficiency distribution of the detector [36].
Figure 14. (a) Position resolution of offline results (Mylar foil in "electron mode") compared to simulation as a function of accelerating potential, keeping the ratio of accelerating to deflection potential at ∼0.778. (b) Uncertainty of the position measurement difference between the PPACs and the E-MCP detector as a function of accelerating potential. (c) The same uncertainty with the PPAC system resolution subtracted (assuming a resolution of 1 mm in both dimensions) as a function of accelerating potential [36].
Figure 15. (a) Imaging of collimated α particles from three holes on a mask placed in front of the foil. (b,c) X- and Y-coordinate projections of the imaging from one hole; the Gaussian fit parameter σ characterizes the resolutions (X: 1.108 mm, Y: 1.098 mm). Deviations between the imaging points on the MCP detector and their physical positions on the mask are smaller than the 1σ uncertainty (the resolution) of the measurements [36].
Figure 16. Possible installation of the foil-MCP detectors at HFRS. Dual foil-MCP detectors (purple blocks) could measure the position, angle, and arrival timing of RIs at the HFRS foci (PF4, MF1-4) on an event-by-event basis [38]. "PS-T D" refers to the position-sensitive timing detector.
Figure A1. (a) X, Y positions of the SEs deflected onto the MCP, deduced from a collimated α source at (30 mm, 0 mm) on a mask with a hole of ∼0.5 mm diameter, as a function of the outer mirror potential with the accelerating grid HV at −6000 V. (b) X, Y positions of the image of the same collimated hole as a function of accelerating potential, keeping the ratio of deflection to accelerating potential constant [36].
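The betatron-oscillation fit function quoted in the Figure 5 caption, x = x0 + a·exp(−b·T)·sin(2π·(T − T0)/ω) + c·T, can be evaluated directly. The sketch below generates synthetic revolution data with made-up parameter values; the paper fits this model to measured per-turn positions:

```python
import numpy as np

def betatron_x(T, x0, a, b, T0, omega, c):
    """Damped betatron-oscillation model from the Figure 5 caption:
    x = x0 + a*exp(-b*T)*sin(2*pi*(T - T0)/omega) + c*T."""
    return x0 + a * np.exp(-b * T) * np.sin(2 * np.pi * (T - T0) / omega) + c * T

# Synthetic revolutions: a 1 mm oscillation decaying over ~500 turns with a
# slow linear drift. All parameter values are invented for illustration.
T = np.arange(0.0, 1000.0)
x = betatron_x(T, x0=0.0, a=1.0, b=1 / 500, T0=0.0, omega=30.0, c=1e-4)
print(x[0])  # sin(0) = 0 and c*0 = 0, so the first sample is 0.0
```

In practice the amplitude a and tune-related period ω are the physically interesting fit outputs, and the 1 mm (σ) detector resolution sets the error bars on each point.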
14 pages, 6578 KiB  
Article
Research on the Method of Depth-Sensing Optical System Based on Multi-Layer Interface Reflection
by Chen Yu, Ying Liu, Linhan Li, Guangpeng Zhou, Boshi Dang, Jie Du, Junlin Ma and Site Zhang
Sensors 2024, 24(22), 7228; https://doi.org/10.3390/s24227228 - 12 Nov 2024
Viewed by 463
Abstract
In this paper, a depth-sensing method employing active irradiation of a semi-annular beam is proposed for observing the multi-layered reflective surfaces of transparent samples with higher resolutions and lower interference. To obtain the focusing resolution of the semi-annular aperture diaphragm system, a model [...] Read more.
In this paper, a depth-sensing method employing active irradiation of a semi-annular beam is proposed for observing the multi-layered reflective surfaces of transparent samples with higher resolutions and lower interference. To obtain the focusing resolution of the semi-annular aperture diaphragm system, a model for computing the diffracted optical energy distribution of an asymmetric aperture diaphragm is constructed, and mathematical formulas are deduced for determining the system resolution based on the position of the first dark ring of the amplitude distribution. Optical simulations were performed under specific conditions; the lateral resolution δr of the depth-sensing system was determined to be 0.68 μm, and the focusing accuracy δz was determined to be 0.60 μm. An experimental platform was established under the same conditions, and the experimental results were in accord with the simulation results, validating the correctness of the formula for calculating the amplitude distribution of the diffracted light from the asymmetric aperture diaphragm. Full article
(This article belongs to the Section Optical Sensors)
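The resolution criterion above hinges on locating the first dark ring of the diffracted amplitude. As a rough, self-contained numerical sketch (not the authors' model; the grid size, radius ratio τ, and minimum-finding rule are assumptions), the Fraunhofer pattern of a semi-annular aperture can be sampled with a 2-D FFT and its first off-axis minimum located:

```python
import numpy as np

# Far-field (Fraunhofer) diffraction of a semi-annular aperture via a 2-D FFT.
N = 512                       # samples per axis (assumed)
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)
tau = 0.5                     # inner/outer radius ratio of the annulus (assumed)

# Semi-annular pupil: annulus tau <= r <= 1 restricted to the upper half-plane.
pupil = ((R <= 1.0) & (R >= tau) & (Y >= 0)).astype(float)

# The Fraunhofer pattern is the squared magnitude of the pupil's Fourier transform.
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
intensity = np.abs(field) ** 2
intensity /= intensity.max()  # normalize the central peak to 1

# The first minimum along an axis bounds the central lobe; its position is what
# sets the lateral resolution in the paper's formulation.
center = N // 2
profile = intensity[center, center:]
first_min = int(np.argmax((profile[1:-1] < profile[:-2]) &
                          (profile[1:-1] < profile[2:]))) + 1
print("first minimum at frequency bin", first_min)
```

Mapping the bin index back to physical angle would require the actual aperture size and wavelength, which the sketch deliberately leaves out.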
Figure 1
<p>Focus map of cells at different levels of the pine stem.</p>
Full article ">Figure 2
<p>Schematic diagram of the principle of fast depth-sensing with multi-layer transparent surface.</p>
Full article ">Figure 3
<p>Computational modeling of diffracted light fields. (<b>a</b>) is a schematic diagram of the computational model for the diffracted light field distribution of a semi-circular aperture diaphragm. (<b>b</b>) is a schematic diagram of the computational model for the diffracted light field distribution of a semi-annular aperture diaphragm.</p>
Full article ">Figure 4
<p>Far-field diffraction amplitude profiles of semi-circular and semi-annular aperture diaphragms at different <span class="html-italic">N</span> values. (<b>a</b>–<b>e</b>) are the far-field diffractograms of semi-circular aperture diaphragms obtained when <span class="html-italic">N</span> = 1, <span class="html-italic">N</span> = 10, <span class="html-italic">N</span> = 50, <span class="html-italic">N</span> = 100, and <span class="html-italic">N</span> = 1000, respectively. (<b>f</b>–<b>j</b>) are also the far-field diffractograms of semi-annular aperture diaphragms obtained at <span class="html-italic">N</span> = 1, <span class="html-italic">N</span> = 10, <span class="html-italic">N</span> = 50, <span class="html-italic">N</span> = 100, and <span class="html-italic">N</span> = 1000, respectively.</p>
Full article ">Figure 5
<p>Far-field diffraction amplitude distribution of the semi-annular aperture diaphragm. (<b>a</b>–<b>d</b>) are three-dimensional plots of the amplitude distribution obtained at <span class="html-italic">τ</span> = 0, <span class="html-italic">τ</span> = 0.2, <span class="html-italic">τ</span> = 0.5, and <span class="html-italic">τ</span> = 0.7, respectively.</p>
Full article ">Figure 6
<p>Structure of the optical path of the semi-annular aperture diaphragm depth-sensing system.</p>
Full article ">Figure 7
<p>Schematic diagram of the distribution of far-field diffracted light field from a semi-annular aperture diaphragm. (<b>a</b>) is a light energy simulation diagram when focused. (<b>b</b>) is a simulation diagram of relative irradiance.</p>
Full article ">Figure 8
<p>List of detector points for the reflecting surface at the confocal position and the reflecting surface at the defocused <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>δ</mi> </mrow> <mrow> <mi>z</mi> </mrow> </msub> </mrow> </semantics></math> position.</p>
Full article ">Figure 9
<p>Schematic diagram of the experimental setup. (<b>a</b>) is the overall setup diagram of the experiment. (<b>b</b>) is the interior view of the depth-sensing system device. (<b>c</b>) is a simulation diagram of the parts of the depth-sensing system device.</p>
Full article ">Figure 10
<p>Spot diagram at <span class="html-italic">z</span> = 0 nm and its energy analysis. (<b>a</b>) is the spot at the focal point. (<b>b</b>) is an energy-analyzed three-dimensional diagram of the spot at the focal point.</p>
Full article ">Figure 11
<p>Plot of spot energy analysis at <span class="html-italic">z</span> = 0 nm and <span class="html-italic">z</span> = 610 nm. (<b>a</b>) is an energy-analyzed three-dimensional diagram of the spot at <span class="html-italic">z</span> = 0 nm. (<b>b</b>) is an energy-analyzed three-dimensional diagram of the spot at <span class="html-italic">z</span> = 610 nm.</p>
Full article ">Figure 12
<p>Schematic diagram of the three reflective surfaces and their spot maps. (<b>a</b>) is a schematic diagram of how the experimental setup is constructed. (<b>b</b>) is a graph of the experimental results obtained.</p>
Full article ">
22 pages, 5456 KiB  
Article
Computer-Vision-Aided Deflection Influences Line Identification of Concrete Bridge Enhanced by Edge Detection and Time-Domain Forward Inference
by Jianfeng Chen, Long Zhao, Yuliang Feng and Zhiwei Chen
Buildings 2024, 14(11), 3537; https://doi.org/10.3390/buildings14113537 - 5 Nov 2024
Abstract
To enhance the accuracy and efficiency of the deflection response measurement of concrete bridges with a non-contact scheme and address the ill-conditioned nature of the inverse problem in influence line (IL) identification, this study introduces a computer-vision-aided deflection IL identification method that integrates [...] Read more.
To enhance the accuracy and efficiency of the deflection response measurement of concrete bridges with a non-contact scheme and address the ill-conditioned nature of the inverse problem in influence line (IL) identification, this study introduces a computer-vision-aided deflection IL identification method that integrates edge detection and time-domain forward inference (TDFI). The methodology proposed in this research leverages computer vision technology with edge detection to surpass traditional contact-based measurement methods, greatly enhancing the operational efficiency and applicability of IL identification and, in particular, addressing the challenge of accurately measuring small deflections in concrete bridges. To mitigate the limitations of the Lucas–Kanade (LK) optical flow method, such as unclear feature points within the camera’s field of view and occasional point loss in certain video frames, an edge detection technique is employed to identify maximum values in the first-order derivatives of the image, creating virtual tracking points at the bridge edges through image processing. By precisely defining the bridge boundaries, only the essential structural attributes are preserved to enhance the reliability of minimal deflection deformations under vehicular loads. To tackle the ill-posed nature of the inverse problem, a TDFI model is introduced to identify IL, recursively capturing the static response of the bridge under the successive axles of a multi-axle vehicle. The IL is then computed by dividing the response by the weight of the preceding axle. Furthermore, an axle weight ratio reduction coefficient is proposed to mitigate noise amplification issues, ensuring that the weight of the preceding axle surpasses that of any other axle. 
To validate the accuracy and robustness of the proposed method, it is applied to numerical examples of a simply supported concrete beam, indoor experiments on a similar beam, and field tests on a three-span continuous concrete beam bridge. Full article
(This article belongs to the Special Issue Study on Concrete Structures)
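The TDFI recursion described in the abstract can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions (the response is a superposition of one influence line shifted by known axle lags, the first axle leads and is heaviest, and the data are noise-free), not the authors' implementation; all numeric values are made up:

```python
import numpy as np

def identify_il(r, weights, lags):
    """Recover the influence line from response r, given axle weights and
    sample lags (lags[0] == 0 for the leading axle)."""
    il = np.zeros_like(r)
    for k in range(len(r)):
        # Subtract the contribution of trailing axles already explained by
        # earlier IL samples, then divide by the leading axle weight.
        trailing = sum(w * il[k - d] for w, d in zip(weights[1:], lags[1:]) if k >= d)
        il[k] = (r[k] - trailing) / weights[0]
    return il

# Synthetic check: build a response from a known triangular IL, then recover it.
n = 200
il_true = np.concatenate([np.linspace(0, 1, n // 2), np.linspace(1, 0, n - n // 2)])
weights, lags = [10.0, 6.0], [0, 15]          # two axles, 15-sample spacing
r = weights[0] * il_true + weights[1] * np.concatenate([np.zeros(15), il_true[:-15]])
il_est = identify_il(r, weights, lags)
print(np.max(np.abs(il_est - il_true)))  # ≈ 0 for noise-free data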
Figure 1
<p>Mid-span of beam: (<b>a</b>) original image; (<b>b</b>) edge detection result; (<b>c</b>) computer vision system calibration.</p>
Full article ">Figure 2
<p>Establishment of virtual tracking points.</p>
Full article ">Figure 3
<p>Static bridge response induced by a vehicle with two axles.</p>
Full article ">Figure 4
<p>Flowchart of the proposed IL identification method.</p>
Full article ">Figure 5
<p>A simply supported beam.</p>
Full article ">Figure 6
<p>IL identification results analysis for reduction coefficient.</p>
Full article ">Figure 7
<p>Bridge response: (<b>a</b>) Scenario 1; (<b>b</b>) Scenario 2.</p>
Full article ">Figure 8
<p>Noise effect analysis.</p>
Full article ">Figure 9
<p>Step size effect analysis.</p>
Full article ">Figure 10
<p>Overview of the case.</p>
Full article ">Figure 11
<p>Beam boundary points used for LK optical flow tracking.</p>
Full article ">Figure 12
<p>Dynamic bridge deflection response: (<b>a</b>) without edge detection (Test 1); (<b>b</b>) with edge detection (Test 2).</p>
Full article ">Figure 13
<p>Illustration of tracking point pathway.</p>
Full article ">Figure 14
<p>IL identification results of different locations: (<b>a</b>) quarter span; (<b>b</b>) half span; (<b>c</b>) three-quarter span.</p>
Full article ">Figure 15
<p>Actual image of bridge: (<b>a</b>) bridge superstructure; (<b>b</b>) bridge substructure.</p>
Full article ">Figure 16
<p>Bridge layout: (<b>a</b>) bridge longitudinal section; (<b>b</b>) bridge cross-section.</p>
Full article ">Figure 17
<p>Actual image of vehicles.</p>
Full article ">Figure 18
<p>Dynamic bridge deflection response: (<b>a</b>) contact-based measurement; (<b>b</b>) LK optical flow method.</p>
Full article ">Figure 19
<p>IL identification results with different methods.</p>
Full article ">
18 pages, 5723 KiB  
Article
Airborne Multi-Channel Forward-Looking Radar Super-Resolution Imaging Using Improved Fast Iterative Interpolated Beamforming Algorithm
by Ke Liu, Yueli Li, Zhou Xu, Zhuojie Zhou and Tian Jin
Remote Sens. 2024, 16(22), 4121; https://doi.org/10.3390/rs16224121 - 5 Nov 2024
Abstract
Radar forward-looking imaging is critical in many civil and military fields, such as aircraft landing, autonomous driving, and geological exploration. Although the super-resolution forward-looking imaging algorithm based on spectral estimation has the potential to discriminate multiple targets within the same beam, the estimation [...] Read more.
Radar forward-looking imaging is critical in many civil and military fields, such as aircraft landing, autonomous driving, and geological exploration. Although the super-resolution forward-looking imaging algorithm based on spectral estimation has the potential to discriminate multiple targets within the same beam, the estimates of the angle and magnitude of the targets are not accurate due to the influence of sidelobe leakage. This paper proposes a multi-channel super-resolution forward-looking imaging algorithm based on the improved Fast Iterative Interpolated Beamforming (FIIB) algorithm to solve the problem. First, the number of targets and the coarse estimates of angle and magnitude are obtained from the iterative adaptive approach (IAA). Then, the accurate estimates of angle and magnitude are achieved by the strategy of iterative interpolation and leakage subtraction in FIIB. Finally, a high-resolution forward-looking image is obtained through non-coherent accumulation. The simulation results of point targets and scenes show that the proposed algorithm can distinguish multiple targets in the same beam, effectively improve the azimuthal resolution of forward-looking imaging, and attain the accurate reconstruction of point targets and the contour reconstruction of extended targets. Full article
(This article belongs to the Section Remote Sensing Image Processing)
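The interpolate-and-refine core that FIIB-style estimators build on can be illustrated in one dimension. This is a minimal sketch in the style of the Aboutanios–Mulgrew interpolator (evaluating the DTFT half a bin on either side of the current estimate), not the paper's multi-channel FIIB with leakage subtraction; the signal length and frequency are made-up values:

```python
import numpy as np

def refine_freq(x, k, iters=3):
    """Refine a coarse DFT peak bin k to a fractional-bin frequency estimate."""
    N = len(x)
    n = np.arange(N)
    delta = 0.0
    for _ in range(iters):
        # DTFT samples half a bin above and below the current estimate.
        Xp = np.sum(x * np.exp(-2j * np.pi * (k + delta + 0.5) * n / N))
        Xm = np.sum(x * np.exp(-2j * np.pi * (k + delta - 0.5) * n / N))
        delta += 0.5 * np.real((Xp + Xm) / (Xp - Xm))
    return k + delta

N = 256
f_true = 40.3                                  # cycles per record (assumed)
x = np.exp(2j * np.pi * f_true * np.arange(N) / N)
k0 = int(np.argmax(np.abs(np.fft.fft(x))))     # coarse estimate: nearest bin
f_hat = refine_freq(x, k0)
print(f_hat)  # ≈ 40.3
```

In a multi-component setting, each refined tone's leakage would be subtracted from the data before refining the next, which is the "iterative interpolation and leakage subtraction" strategy the abstract refers to.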
Figure 1
<p>Geometry for forward-looking imaging of a scanning radar.</p>
Full article ">Figure 2
<p>Sidelobe spillover effect. (<b>a</b>) Two point targets. (<b>b</b>) One strong point target with one weak point target.</p>
Full article ">Figure 3
<p>An example of the failed FIIB algorithm.</p>
Full article ">Figure 4
<p>Estimated results of different methods for multiple point targets.</p>
Full article ">Figure 5
<p>Comparison of forward-looking imaging performance for point target simulation. (<b>a</b>) Original point targets distribution. (<b>b</b>) Real aperture imaging result. (<b>c</b>) Monopulse imaging result. (<b>d</b>) Monopulse imaging result based on Doppler estimates by using FIIB. (<b>e</b>) Multi-channel imaging result based on FIIB. (<b>f</b>) Multi-channel imaging result based on improved FIIB.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>The normalized profiles for point targets at the range cell 1670 m.</p>
Full article ">Figure 7
<p>Comparison of forward-looking imaging performance for scene simulation. (<b>a</b>) Original Ku-band SAR map. (<b>b</b>) Real aperture imaging result. (<b>c</b>) Monopulse imaging result. (<b>d</b>) Monopulse imaging result based on Doppler estimates using FIIB. (<b>e</b>) Multi-channel imaging result based on FIIB. (<b>f</b>) Multi-channel imaging result based on the proposed FIIB.</p>
Figure 7 Cont.">
Full article ">
7 pages, 6169 KiB  
Article
Threshold Gain Reduction in Tandem Semiconductor Nano-Lasers
by Yuanlong Fan, Jing Zhang and K. Alan Shore
Photonics 2024, 11(11), 1037; https://doi.org/10.3390/photonics11111037 - 5 Nov 2024
Abstract
It is shown that a significant reduction in the threshold gain of electrically pumped semiconductor nano-lasers may be achieved in bridge-connected tandem semiconductor nano-lasers. Optimization of the design is achieved by exploring the impact of bridge length and width on the threshold gain. [...] Read more.
It is shown that a significant reduction in the threshold gain of electrically pumped semiconductor nano-lasers may be achieved in bridge-connected tandem semiconductor nano-lasers. Optimization of the design is achieved by exploring the impact of bridge length and width on the threshold gain. In addition, a detailed examination is also made of the emission patterns of the structure. It is found that a trade-off emerges between threshold gain and beam quality where multi-lobed far field emission may be associated with the lowest threshold gains. Full article
Figure 1
<p>The schematic illustration of the uncoupled and coupled nano-laser structures. (<b>a</b>) A 3D schematic of uncoupled nano-lasers. (<b>b</b>) A 3D schematic of coupled nano-lasers. (<b>c</b>) Cross-sectional view of coupled nano-lasers.</p>
Full article ">Figure 2
<p>Electric field intensity diagram of a single nano-laser, uncoupled nano-laser, and coupled nano-laser. (<b>a</b>–<b>c</b>) Horizontal cross-section. (<b>d</b>–<b>f</b>) Vertical cross-section.</p>
Full article ">Figure 3
<p>Far-field diagram of a single nano-laser, uncoupled nano-laser, and coupled nano-laser. (<b>a</b>–<b>c</b>) A 3D far-field diagram. (<b>d</b>–<b>f</b>) Two-dimensional polar coordinates far-field diagram.</p>
Full article ">Figure 4
<p>Effect of bridge length and width on threshold gain.</p>
Full article ">Figure 5
<p>The electric field intensity diagram, 3D far-field distribution, and 2-dimensional polar coordinates far-field diagram of coupled nano-lasers with the increase in the width of the bridge (widths of 150 nm, 200 nm, 250 nm, 300 nm, 350 nm, and 400 nm) when l<sub>bridge</sub> = 1175 nm. The first column is the electric field intensity diagram. The second column is a 3D far-field diagram. The third column is a 2-dimensional polar coordinates far-field diagram.</p>
Full article ">Figure 6
<p>The electric field intensity diagram, 3D far-field distribution, and 2-dimensional polar coordinates far-field diagram of coupled nano-lasers with the increase in the length of the bridge (the length of 300 nm, 475 nm, 650 nm, 825 nm, 1000 nm, 1175 nm) when w<sub>bridge</sub> = 100 nm. The first column is the electric field intensity diagram. The second column is a 3D far-field diagram. The third column is a 2-dimensional polar coordinate far-field diagram.</p>
Full article ">
11 pages, 5565 KiB  
Article
Optical Calibration of a Multi-Color Ellipsometric Mapping Tool Fabricated Using Cheap Parts
by Berhane Nugusse Zereay, Sándor Kálvin, György Juhász, Csaba Major, Péter Petrik, Zoltán György Horváth and Miklós Fried
Photonics 2024, 11(11), 1036; https://doi.org/10.3390/photonics11111036 - 4 Nov 2024
Abstract
We developed and applied a new calibration method to make more accurate measurements with our multi-color ellipsometric mapping tool made from cheap parts. Ellipsometry is an optical technique that measures the relative change in the polarization state of the measurement beam induced by [...] Read more.
We developed and applied a new calibration method to make more accurate measurements with our multi-color ellipsometric mapping tool made from cheap parts. Ellipsometry is an optical technique that measures the relative change in the polarization state of the measurement beam induced by reflection from or transmission through a sample. During conventional ellipsometric measurement, the data collection is relatively slow and measures one spot at a time, so mapping needs a long time compared with our new optical mapping equipment made by an ordinary color LED monitor and a polarization-sensitive camera. The angle of incidence and the incident polarization state is varied point by point, so a special optical calibration method is needed. Three SiO2 samples with different thicknesses were used for the point-by-point determination of the angle of incidence and rho (ρ) corrections. After the calibration, another SiO2 sample was measured and analyzed using the calibrated corrections; further, this sample was independently measured using a conventional spectroscopic ellipsometer. The difference between the two measured thickness maps is less than 1 nm. Our optical mapping tool made from cheap parts is faster and covers wider area samples relative to conventional ellipsometers, and these correction enhancements further demonstrate its performance. Full article
(This article belongs to the Special Issue Polarization Optics)
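The ambient/thin-film/substrate model in Figure 3 maps film thickness to the ellipsometric ratio ρ = tan ψ · e^(iΔ). A minimal forward-model-and-fit sketch (the wavelength, angle of incidence, and refractive indices are assumed rough textbook values, not the tool's calibration data):

```python
import numpy as np

def fresnel(n_i, n_t, cos_i, cos_t):
    """Fresnel reflection coefficients (p and s) at a single interface."""
    rp = (n_t * cos_i - n_i * cos_t) / (n_t * cos_i + n_i * cos_t)
    rs = (n_i * cos_i - n_t * cos_t) / (n_i * cos_i + n_t * cos_t)
    return rp, rs

def rho_film(d_nm, wl_nm=633.0, aoi_deg=70.0, n0=1.0, n1=1.457, n2=3.88 - 0.02j):
    """rho = R_p/R_s of an ambient(n0)/film(n1, thickness d_nm)/substrate(n2) stack."""
    s = n0 * np.sin(np.radians(aoi_deg))
    c0, c1, c2 = [np.sqrt(1 - (s / n) ** 2 + 0j) for n in (n0, n1, n2)]
    rp01, rs01 = fresnel(n0, n1, c0, c1)
    rp12, rs12 = fresnel(n1, n2, c1, c2)
    ph = np.exp(-4j * np.pi * d_nm * n1 * c1 / wl_nm)   # round-trip film phase
    Rp = (rp01 + rp12 * ph) / (1 + rp01 * rp12 * ph)
    Rs = (rs01 + rs12 * ph) / (1 + rs01 * rs12 * ph)
    return Rp / Rs

# Brute-force thickness fit: pick d minimizing |rho_model - rho_measured|.
rho_meas = rho_film(80.0)                     # stand-in for a measured value
grid = np.arange(0.0, 200.0, 0.1)
d_fit = grid[np.argmin(np.abs([rho_film(d) - rho_meas for d in grid]))]
print(d_fit)  # ≈ 80 nm
```

A mapping tool would repeat such a fit per pixel, with the per-pixel angle of incidence and ρ corrections obtained from the calibration described in the abstract.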
Figure 1
<p>Working principle of spectroscopic ellipsometry (source: <a href="https://www.jawoollam.com/resources/ellipsometry-tutorial/what-is-ellipsometry" target="_blank">https://www.jawoollam.com/resources/ellipsometry-tutorial/what-is-ellipsometry</a> (Accessed on 30 September 2024)).</p>
Full article ">Figure 2
<p>Schematics of the non-collimated beam ellipsometer (optical mapping tool made from cheap parts). (1) Light source. (2) Vertical polarizer. (3) Liquid crystal cell. (4) Horizontal polarizer. (5) Sample. (6) Sample holder. (7) Pinhole. (8) Camera sensor.</p>
Full article ">Figure 3
<p>Schematic diagram of the optical interference in an ambient/thin-film/substrate optical model [<a href="#B4-photonics-11-01036" class="html-bibr">4</a>].</p>
Full article ">Figure 4
<p>Placement of a SiO<sub>2</sub> sample at six different positions. (<b>a</b>) Sample position order model. (<b>b</b>) Sample at 3rd position.</p>
Full article ">Figure 5
<p>Schematic drawing of the direct ellipsometric measurement of the monitor.</p>
Full article ">Figure 6
<p>Three-dimensional experimental results of tan ψ and cos Δ values for each color from the direct monitor measurement. Note that the x- and y-axes in our figures represent the pixel group in the sample (51 × 32) and the z-axis (color band) shows the range of measurement values in each corresponding category, depending on the type of map. Left column tan ψ, right column cos Δ maps, upper row blue color band, middle row green color band, lower row red color band.</p>
Figure 6 Cont.">
Full article ">Figure 7
<p>(<b>a</b>) Merged MSE full map. (<b>b</b>) Low-MSE pixel map.</p>
Full article ">Figure 8
<p>(<b>a</b>) Full angle-of-incidence map. (<b>b</b>) Angle of incidence with high-MSE pixels removed.</p>
Full article ">Figure 9
<p>Thickness maps of SiO<sub>2</sub>/Si samples with nominal thickness of 40 nm, 60 nm, and 100 nm (low-MSE areas) from the refined central 20 × 15 cm part.</p>
Full article ">Figure 10
<p>Maps of calibrated ρ<sub>monitor</sub> values. Left columns: absolute value of ρ<sub>monitor</sub>; right columns: phase shift-correction maps. Upper row: blue (450 nm) color band; middle row: green (550 nm) color band; lower row: red (650 nm) color band.</p>
Full article ">Figure 11
<p>(<b>a</b>) Thickness map of oxide sample with nominal thickness of 80 nm produced by Woollam M2000 SE (note that our M2000 can only map the central 14 cm diameter area of the 20 cm diameter sample). (<b>b</b>) Thickness map of the same SiO<sub>2</sub>/Si sample 20 × 15 cm area produced by the non-collimated, calibrated mapping tool.</p>
Full article ">
28 pages, 27981 KiB  
Article
Acoustic Imaging Learning-Based Approaches for Marine Litter Detection and Classification
by Pedro Alves Guedes, Hugo Miguel Silva, Sen Wang, Alfredo Martins, José Almeida and Eduardo Silva
J. Mar. Sci. Eng. 2024, 12(11), 1984; https://doi.org/10.3390/jmse12111984 - 3 Nov 2024
Abstract
This paper introduces an advanced acoustic imaging system leveraging multibeam water column data at various frequencies to detect and classify marine litter. This study encompasses (i) the acquisition of test tank data for diverse types of marine litter at multiple acoustic frequencies; (ii) [...] Read more.
This paper introduces an advanced acoustic imaging system leveraging multibeam water column data at various frequencies to detect and classify marine litter. This study encompasses (i) the acquisition of test tank data for diverse types of marine litter at multiple acoustic frequencies; (ii) the creation of a comprehensive acoustic image dataset with meticulous labelling and formatting; (iii) the implementation of sophisticated classification algorithms, namely support vector machine (SVM) and convolutional neural network (CNN), alongside cutting-edge detection algorithms based on transfer learning, including single-shot multibox detector (SSD) and You Only Look Once (YOLO), specifically YOLOv8. The findings reveal discrimination between different classes of marine litter across the implemented algorithms for both detection and classification. Furthermore, cross-frequency studies were conducted to assess model generalisation, evaluating the performance of models trained on one acoustic frequency when tested with acoustic images based on different frequencies. This approach underscores the potential of multibeam data in the detection and classification of marine litter in the water column, paving the way for developing novel research methods in real-life environments. Full article
(This article belongs to the Special Issue Applications of Underwater Acoustics in Ocean Engineering)
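The polar and Cartesian acoustic images in Figures 6 and 7 differ only in how the (beam angle, range) samples are re-projected. A minimal nearest-sample re-projection sketch (the swath geometry, image size, and random stand-in backscatter are assumptions, not the authors' pipeline):

```python
import numpy as np

# Re-project multibeam water column samples, indexed by (beam, range bin),
# into a Cartesian image suitable for CNN/detector input.
n_beams, n_ranges = 128, 256
angles = np.radians(np.linspace(-60, 60, n_beams))   # assumed 120-degree swath
ranges = np.linspace(0.5, 20.0, n_ranges)            # metres (assumed)

rng = np.random.default_rng(0)
polar = rng.random((n_beams, n_ranges))              # stand-in backscatter data

H = W = 200
img = np.zeros((H, W))
x = ranges[None, :] * np.sin(angles[:, None])        # across-track position
y = ranges[None, :] * np.cos(angles[:, None])        # along-beam distance
col = ((x + 20.0) / 40.0 * (W - 1)).astype(int)      # map metres to pixels
row = (y / 20.0 * (H - 1)).astype(int)
valid = (col >= 0) & (col < W) & (row >= 0) & (row < H)
img[row[valid], col[valid]] = polar[valid]           # nearest-sample fill
print(img.shape)
```

A production pipeline would typically interpolate rather than nearest-fill, and normalize intensities per frequency before feeding the detector.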
Figure 1
<p>Marine litter in the water column. Courtesy of Unsplash by Naja Jensen.</p>
Full article ">Figure 2
<p>Kongsberg M3 Multibeam High-Frequency Echosounder system setup in the test tank. (<b>a</b>) Test tank setup, (<b>b</b>) MBES capturing the Wooden deck in the water column.</p>
Full article ">Figure 3
<p>Marine debris used for the test tank dataset. PVC Squares (1); PVC traffic cone (2); Wooden deck (3); vinyl sheet (4); fish net (5).</p>
Full article ">Figure 4
<p>High-level architecture for the MBES sensor and acoustic imaging for detection and classification problems.</p>
Full article ">Figure 5
<p>Raw acoustic images of a PVC square at the same range, with varying FOV due to the different acoustic frequencies. (<b>a</b>) Raw acoustic image of 1200 kHz, (<b>b</b>) Raw acoustic image of 1400 kHz.</p>
Full article ">Figure 6
<p>Cartesian acoustic image of a PVC square in the water column.</p>
Full article ">Figure 7
<p>Polar acoustic image of a PVC square in the water column.</p>
Full article ">Figure 8
<p>Class Activation Map applied to the CNN with a polar image of a PVC square as an input.</p>
Full article ">Figure 9
<p>SSD model inference in two polar acoustic images with multiple targets with the target detection confidence.</p>
Full article ">Figure 10
<p>YOLOv8 model inference in polar acoustic images with multiple targets with the target detection confidence.</p>
Full article ">
16 pages, 4929 KiB  
Article
A Comparative Crash-Test of Manual and Semi-Automated Methods for Detecting Complex Submarine Morphologies
by Vasiliki Lioupa, Panagiotis Karsiotis, Riccardo Arosio, Thomas Hasiotis and Andrew J. Wheeler
Remote Sens. 2024, 16(21), 4093; https://doi.org/10.3390/rs16214093 - 2 Nov 2024
Abstract
Multibeam echosounders provide ideal data for the semi-automated seabed feature extraction and accurate morphometric measurements. In this study, bathymetric and raw backscatter data were initially used to manually delimit the reef morphologies found in an insular semi-enclosed gulf in the northern Aegean Sea [...] Read more.
Multibeam echosounders provide ideal data for semi-automated seabed feature extraction and accurate morphometric measurements. In this study, bathymetric and raw backscatter data were initially used to manually delimit the reef morphologies found in an insular semi-enclosed gulf in the northern Aegean Sea (Gera Gulf, Lesvos Island, Greece). The complexity of this environment makes it an ideal area to “crash test” (test to the limit) and compare the results of the delineation methods. A large number of (more than 7000) small but prominent reefs were detected, which made manual mapping extremely time-consuming. Three semi-automated tools were also employed to map the reefs: the Benthic Terrain Modeler (BTM), Confined Morphologies Mapping (CoMMa), and eCognition Multiresolution Segmentation. BTM did not function properly with irregular reef footprints, but by modifying both the bathymetry and slope, the outcome was improved, producing accurate results that appeared to exceed the accuracy of manual mapping. CoMMa, a new GIS morphometric toolbox, was a “one-stop shop” that, besides generating satisfactory reef delineation results (i.e., detecting the same total reef area as the manual method), was also used to extract the morphometric characteristics of the polygons resulting from all the methods. Lastly, the Multiresolution Segmentation also gave satisfactory results with the highest precision. To compare the final maps with the distribution of the reefs, mapcurves were created to estimate the goodness-of-fit (GOF), with Precision, Recall, and F1 scores higher than 0.78, suggesting good detection accuracy for the semi-automated methods. The analysis reveals that the semi-automated methods provided more efficient results in comparison with the time-consuming manual mapping. Overall, for this case study, modifying the bathymetry and slope further enhanced the accuracy of the results. 
This study asserts that the use of semi-automated mapping is an effective method for delineating the geomorphometry of intricate relief and serves as a powerful tool for habitat mapping and decision-making. Full article
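The overlap scoring sketched in Figure 2 reduces to area ratios: with A the reference reef area, B the compared map's reef area, and C their overlap, Precision = C/B, Recall = C/A, and F1 is their harmonic mean. A tiny worked example with made-up areas:

```python
def overlap_scores(area_a, area_b, area_c):
    """Precision, Recall, and F1 from reference area A, compared area B,
    and overlap area C (same units, e.g. square metres)."""
    precision = area_c / area_b
    recall = area_c / area_a
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: reference 1000 m^2, detected 950 m^2, overlap 850 m^2 (made up).
p, r, f1 = overlap_scores(1000.0, 950.0, 850.0)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.895 0.85 0.872
```

Scores above 0.78, as reported in the abstract, therefore mean that most of each map's reef area coincides with the reference map's.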
Figure 1
<p>The location of the study areas (Gera Gulf) in Greece and Lesvos Island (red circle).</p>
Full article ">Figure 2
<p>Sketch of overlapping maps (reefs) representing the values of GOF (A: reef area of the reference map, B: reef area of the second map, and C: overlapping area between reef A and B).</p>
Full article ">Figure 3
<p>Sketch showing polygons with (<b>a</b>) unique, (<b>b</b>) maximum, and (<b>c</b>) low GOF values (red polygon: manual mapping, green polygon: BTM-B4).</p>
Full article ">Figure 4
<p>Bathymetric map of the inner Gera Gulf with 2-m interval contours for a depth of 6 to 18 m.</p>
Full article ">Figure 5
<p>Polygons (reefs) produced using (<b>a</b>) manual, (<b>b</b>) BTM-B4, (<b>c</b>) BTM-B4S2 mapping, (<b>d</b>) CoMMa mapping, and (<b>e</b>) eCognition mapping.</p>
Full article ">Figure 6
<p>Bathymetric map of the Gera Gulf, with zoom-in areas (<b>i</b>–<b>iv</b>) showing the resulting polygons for all the methods (red polygons: manual, green: BTM-B4, blue: BTM-B4S2, magenta: CoMMa, yellow: eCognition).</p>
Full article ">Figure 7
<p>Categorization of the reef heights, created using (<b>a</b>) manual, (<b>b</b>) BTM-B4, (<b>c</b>) BTM-B4S2 mapping, (<b>d</b>) CoMMa mapping, and (<b>e</b>) eCognition mapping.</p>
Full article ">Figure 8
<p>Categorization of the reef area, created by (<b>a</b>) manual, (<b>b</b>) BTM-B4, (<b>c</b>) BTM-B4S2 mapping, (<b>d</b>) CoMMa mapping, and (<b>e</b>) eCognition mapping.</p>
Full article ">Figure 9
<p>Map of the reefs not detected using (<b>a</b>) manual BTM-B4 mapping, (<b>b</b>) using manual or BTM-B4S2 mapping, (<b>c</b>) using manual or CoMMa mapping, and (<b>d</b>) using manual or eCognition mapping.</p>
Full article ">Figure 10
<p>Mapcurve for GOF calculations using (<b>a</b>) manual as RF compared with BTM-B4, (<b>b</b>) manual as RF compared with BTM-B4S2, (<b>c</b>) manual as RF compared with CoMMa, and (<b>d</b>) manual as RF compared with eCognition.</p>
Full article ">
11 pages, 3465 KiB  
Article
Adaptive Beamforming for On-Orbit Satellite-Based ADS-B Based on FCNN
by Yiran Xiang, Songting Li and Lihu Chen
Sensors 2024, 24(21), 7065; https://doi.org/10.3390/s24217065 - 2 Nov 2024
Abstract
Digital multi-beam synthesis technology is generally used in the on-orbit satellite-based Automatic Dependent Surveillance–Broadcast (ADS-B) system. However, the probability of successfully detecting aircraft with uneven surface distribution is low. An adaptive digital beamforming method is proposed to improve the efficiency of aircraft detection [...] Read more.
Digital multi-beam synthesis technology is generally used in the on-orbit satellite-based Automatic Dependent Surveillance–Broadcast (ADS-B) system. However, the probability of successfully detecting aircraft that are unevenly distributed over the Earth's surface is low. Adaptive digital beamforming can improve the aircraft detection probability, but existing adaptive methods have long computation times and are not suitable for on-orbit operation. Therefore, this paper proposes an adaptive beamforming method for the ADS-B system based on a fully connected neural network (FCNN). The simulation results show that the calculation time of this method is about 2.6 s when more than 15,000 sets of data are inputted, which is 15–80% faster than existing methods. Its detection success probability is 10% higher than those of existing methods, and it is more robust to large amounts of data. Full article
(This article belongs to the Section Remote Sensors)
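The uniform beams in Figure 6 come from fixed steering weights; an adaptive scheme (FCNN or otherwise) replaces the weight computation, not the beamforming itself. A minimal conventional-beamforming sketch (half-wavelength uniform linear array with an assumed element count; not the on-orbit implementation):

```python
import numpy as np

def steering(theta_deg, n):
    """Steering vector of an n-element uniform linear array, d = lambda/2."""
    k = np.pi * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n))

def array_gain_db(w, theta_deg):
    """Array factor magnitude (dB) of weight vector w toward theta_deg."""
    af = np.vdot(w, steering(theta_deg, len(w)))
    return 20 * np.log10(np.abs(af) + 1e-12)

n = 8
w = steering(30.0, n) / np.sqrt(n)              # steer the main beam to 30 deg
print(round(array_gain_db(w, 30.0), 2))         # peak gain: 10*log10(8) ≈ 9.03 dB
print(array_gain_db(w, -20.0) < array_gain_db(w, 30.0))  # sidelobe is lower
```

An adaptive beamformer would instead choose `w` (e.g. via an FCNN trained on aircraft density maps) to place beams where traffic is concentrated.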
Figure 1
<p>Principle of adaptive beamforming for on-orbit satellite-based ADS-B.</p>
Figure 2">
Figure 2
<p>Flow diagram of the on-orbit ADS-B adaptive beamforming method.</p>
Figure 3
<p>LMS-based amplitude–phase mismatch calibration.</p>
Figure 4
<p>The implementation process of an adaptive beamforming scheme.</p>
Figure 5
<p>The result using the weighted network of the 6000th iteration.</p>
Figure 6
<p>The uniform beam and the adaptively adjusted beam.</p>
Figure 7
<p>Calculation time comparison chart.</p>
Figure 8
<p>Detection probability of aircraft comparison chart.</p>
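">
Figure 3 above refers to LMS-based amplitude–phase mismatch calibration. A generic single-coefficient complex-LMS loop of that kind can be sketched as follows; the mismatch value, step size, and signal model are assumed for illustration and are not taken from the article:

```python
import numpy as np

# Estimate one complex amplitude/phase mismatch coefficient with LMS.
rng = np.random.default_rng(0)
true_mismatch = 0.8 * np.exp(1j * 0.3)   # unknown channel gain to estimate (assumed)
mu = 0.05                                 # LMS step size
g = 1.0 + 0j                              # initial estimate

for _ in range(2000):
    ref = rng.standard_normal() + 1j * rng.standard_normal()  # known reference signal
    received = true_mismatch * ref        # signal through the mismatched path
    err = received - g * ref              # a-priori estimation error
    g += mu * err * np.conj(ref)          # complex LMS update

# g now tracks true_mismatch; applying 1/g to the channel removes the mismatch.
```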
17 pages, 4060 KiB  
Article
Energy Efficient Multi-Active/Multi-Passive Antenna Arrays for Portable Access Points
by Muhammad Haroon Tariq, Shuai Zhang, Christos Masouros and Constantinos B. Papadias
Micromachines 2024, 15(11), 1351; https://doi.org/10.3390/mi15111351 - 1 Nov 2024
Abstract
This article is about improving wireless network connectivity. The main goal is to provide wireless service in use cases and scenarios that may not be adequately covered today, such as home connectivity, street-based infrastructure, emergency situations, disaster areas, special event areas, and remote areas with problematic or inadequate network, and possibly power, infrastructure. The target system we consider for such scenarios is an energy-efficient self-backhauled base station (also called a "portable access point", PAP) mounted on a drone to aid and expand the land-based network. For the wireless backhaul link of the PAP, as well as for the fronthaul of the street-mounted base station, we consider newly built multi-active/multi-passive parasitic antenna arrays (MAMPs). These antenna systems increase range and signal strength with low hardware complexity and power consumption, owing to their reduced number of radio frequency (RF) chains, which also decreases the cost and weight of the base station. MAMPs can approach the performance of traditional multiple-input/multiple-output (MIMO) systems that use as many antenna elements as RF chains, and of phased arrays. They can produce a high-gain, narrow directional beam in any desired direction simply by tuning the load values of the parasitic elements. The MAMP is designed according to radiation conditions derived during this research, which ensure good radiation properties for the array. Full article
(This article belongs to the Special Issue Microwave Passive Components, 2nd Edition)
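The parasitic-array principle the abstract describes (shaping the beam by tuning passive loads rather than feeding every element) can be sketched with a toy mutual-impedance model: the currents on the undriven, reactively loaded elements are induced through mutual coupling and act as the array excitation. All numeric values below are assumed, illustrative numbers, not the MAMP design from the article:

```python
import numpy as np

# Assumed mutual-impedance matrix for three coupled dipoles (ohms):
Z = np.array([[73.1 + 42.5j, 40.8 - 28.3j,  4.0 - 17.5j],
              [40.8 - 28.3j, 73.1 + 42.5j, 40.8 - 28.3j],
              [ 4.0 - 17.5j, 40.8 - 28.3j, 73.1 + 42.5j]])

loads = np.array([0.0, 0.0, -30.0])    # reactance of the load on each element
v = np.array([1.0 + 0j, 0.0, 0.0])     # only element 0 is fed by an RF chain

# Solve (Z + j*diag(X)) I = V for the element currents, including the
# currents induced on the undriven (parasitic) elements:
currents = np.linalg.solve(Z + 1j * np.diag(loads), v)

# The currents are the excitation: evaluate the array factor over 0..180 deg.
d = 0.25                               # element spacing in wavelengths (assumed)
ang = np.deg2rad(np.arange(0, 181))
af = np.abs(np.exp(1j * 2 * np.pi * d * np.outer(np.cos(ang), np.arange(3))) @ currents)
```

Changing `loads` changes the induced currents, and hence the beam, without adding RF chains — the mechanism behind the single-RF-chain economy claimed for MAMPs.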
Figure 1
<p>Backhauling of drones and connectivity for emergency situations.</p>
Figure 2">
Figure 2
<p>Multi-Active/Multi-Passive parasitic antenna array geometry.</p>
Figure 3
<p>MAMP geometry in MATLAB; blue colored crosses represent the parasitic elements and the red dots represent the active elements.</p>
Figure 4
<p>The geometry of the MAMP array in CST; the cylinders are half-wave dipoles connected to passive components and grounded through via holes. The blue dots are the loads (passive components, capacitors, or inductors calculated from SBA), whereas the red dots are the feeding ports that feed the active elements.</p>
Figure 5
<p>Geometry of the MAMP in CST: (<b>a</b>) top view of the structure; (<b>b</b>) bottom view.</p>
Figure 6
<p>Theoretical and simulated radiation patterns of the MAMP array: magenta represents a ULA with 2 active elements, red the 5-element ULA compared with the MAMP beam, blue the MAMP beam pattern calculated using the SBA for different load combinations, and black the MAMP radiation pattern obtained from CST simulations.</p>
Figure 7
<p>Three-dimensional radiation pattern of the MAMP array obtained from CST simulations.</p>
Figure 8
<p>MAMP antenna array prototype.</p>
Figure 9
<p>Setup for measuring the S-parameters using a vector network analyzer.</p>
Figure 10
<p>Measuring the radiation pattern in the anechoic chamber: (<b>a</b>) antenna under test; (<b>b</b>) setup for the measurement of the radiation pattern.</p>
Figure 11
<p>Measured and simulated S-parameters of the MAMP antenna array. The red curve represents the simulated S-parameters and the blue curve the measured ones.</p>
Figure 12
<p>Measured and simulated polar radiation patterns of the MAMP antenna array.</p>
Figure 13
<p>Measured and simulated gain of the MAMP antenna array.</p>
13 pages, 6903 KiB  
Article
Inverse-Designed Ultra-Compact Passive Phase Shifters for High-Performance Beam Steering
by Tianyang Fu, Mengfan Chu, Ke Jin, Honghan Sha, Xin Yan, Xueguang Yuan, Yang’an Zhang, Jinnan Zhang and Xia Zhang
Sensors 2024, 24(21), 7055; https://doi.org/10.3390/s24217055 - 1 Nov 2024
Abstract
Ultra-compact passive phase shifters are inversely designed with a multi-objective particle swarm optimization (MOPSO) algorithm. The wavelength-dependent phase difference between the two output beams originates from the different distances the input light travels through a 4 μm × 3.2 μm rectangular waveguide with randomly distributed air-hole arrays. As the wavelength changes from 1535 to 1565 nm, phase difference tuning ranges of 6.26 rad and 6.95 rad are obtained for the TE and TM modes, respectively. Compared with an arrayed waveguide grating counterpart, the phase shifters exhibit higher transmission with a much smaller footprint. By combining the inverse-designed phase shifter with a random-grating emitter, integrated beam-steering structures are built that show large lateral scanning ranges of ±25.47° and ±27.85° for the TE and TM modes, respectively. This work may pave the way for the development of ultra-compact high-performance optical phased array LiDARs. Full article
(This article belongs to the Special Issue Recent Advances in LiDAR Sensor)
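The link between the wavelength-dependent phase difference and the lateral scanning range follows the usual 1-D phased-array relation sin θ = Δφ·λ/(2πd), where Δφ is the phase difference between adjacent emitters and d the emitter pitch. A minimal sketch, with a hypothetical 2 μm pitch (the abstract does not state one):

```python
import numpy as np

def steering_angle_deg(delta_phi_rad, wavelength_nm, pitch_nm):
    """Far-field steering angle of a 1-D emitter array whose adjacent
    channels differ in phase by delta_phi_rad: sin(theta) = dphi*lambda/(2*pi*d)."""
    s = delta_phi_rad * wavelength_nm / (2 * np.pi * pitch_nm)
    return np.degrees(np.arcsin(s))

# With the assumed 2 um pitch, a +/- pi inter-channel phase difference
# at 1550 nm steers the beam over roughly +/- 23 degrees:
span = steering_angle_deg(np.pi, 1550.0, 2000.0)
```

The several-radian phase tuning ranges reported in the abstract are what allow the phase difference to sweep through a full 2π as the wavelength scans, yielding the quoted angular ranges for a suitable emitter pitch.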
Figure 1
<p>MOPSO flowchart.</p>
Figure 2">
Figure 2
<p>Structures of the passive phase shifters. (<b>a</b>) 3-D structural schematic of the TE mode device. (<b>b</b>) Top view of the TE mode device. (<b>c</b>) 3-D structural schematic of the TM mode device. (<b>d</b>) Top view of the TM mode device. (<b>e</b>) Top view of a pixel and side view of the input and output waveguides.</p>
Figure 3
<p>Field distribution of phase shifters for (<b>a</b>) TE mode and (<b>b</b>) TM mode.</p>
Figure 4
<p>Phase maps of (<b>a</b>) Port1 of the TE mode device, (<b>b</b>) Port2 of the TE mode device, (<b>c</b>) Port1 of the TM mode device, and (<b>d</b>) Port2 of the TM mode device.</p>
Figure 5
<p>Transmission of phase shifters. (<b>a</b>) TE mode. (<b>b</b>) TM mode.</p>
Figure 6
<p>AWG-based phase shifter structure.</p>
Figure 7
<p>Phase error of the inverse-designed phase shifters. (<b>a</b>) Port1 of the TE mode device. (<b>b</b>) Port2 of the TE mode device. (<b>c</b>) Port1 of the TM mode device. (<b>d</b>) Port2 of the TM mode device.</p>
Figure 8
<p>Transmission of the inverse-designed phase shifter and its AWG counterpart. (<b>a</b>) TE mode. (<b>b</b>) TM mode.</p>
Figure 9
<p>Inverse-designed integrated beam-steering structure for TE mode. (<b>a</b>) Three-dimensional scheme of the whole structure. (<b>b</b>) Top view of the phase shifter, including the Si waveguide structure and air holes. (<b>c</b>) Side view of the emitter.</p>
Figure 10
<p>Inverse-designed integrated beam-steering structure for TM mode. (<b>a</b>) Three-dimensional scheme of the whole structure. (<b>b</b>) Top view of the phase shifter, including the Si waveguide structure and air holes. (<b>c</b>) Side view of the emitter.</p>
Figure 11
<p>Far-field position of the inverse-designed integrated beam-steering structure. (<b>a</b>) TE mode. (<b>b</b>) TM mode.</p>