Sensors, Volume 23, Issue 6 (March-2 2023) – 490 articles

Cover Story: Haptic perception is an essential component of the sensory information available to surgeons, but direct tactile assessment of textures is impeded in minimally invasive surgery: surgical instruments transmit only limited and distorted haptic information that requires the surgeon's interpretation. Part of this information can be acquired and analysed using a vibration-sensing setup attached to the instrument. The presented study investigates this approach based on the vibro-acoustic signals acquired during robot-assisted palpation of different materials. A continuous wavelet transformation-based processing strategy shows that material-specific signatures in the time–frequency domain can be used to classify the materials.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
21 pages, 2586 KiB  
Article
Enhancing Microservices Security with Token-Based Access Control Method
by Algimantas Venčkauskas, Donatas Kukta, Šarūnas Grigaliūnas and Rasa Brūzgienė
Sensors 2023, 23(6), 3363; https://doi.org/10.3390/s23063363 - 22 Mar 2023
Cited by 2 | Viewed by 5628
Abstract
Microservices are compact, independent services that work together with other microservices to support a single application function. This effective design pattern lets organizations deliver high-quality applications quickly, and it allows one service in an application to be altered without affecting the other services. Containers and serverless functions, two cloud-native technologies, are frequently used to create microservices applications. A distributed, multi-component program has a number of advantages, but it also introduces new security risks that are not present in more conventional monolithic applications. The objective of this work is to propose an access control method that ensures the enhanced security of microservices. The proposed method was experimentally tested and validated against centralized and decentralized microservices architectures. The results showed that the proposed method enhanced the security of decentralized microservices by distributing the access control responsibility across multiple microservices within the external authentication and internal authorization processes. This allows permissions between microservices to be managed easily, helps prevent unauthorized access to sensitive data and resources, and reduces the risk of attacks on microservices.
Figures:
Figure 1. Concept of the JWT-driven access control in microservices.
Figure 2. UML sequence diagram of the generalized process for centralized access control in microservices.
Figure 3. Diagram of the components for centralized access control in microservices.
Figure 4. Diagram of the components for the proposed decentralized access control in microservices.
Figure 5. Structure of external JWT tokens.
Figure 6. Structure of the internal JWT token.
Figure 7. Usage of CPU.
Figure 8. Usage of RAM.
Figure 9. Processing of requests.
Figure 10. Weighting factors.
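To make the external/internal token split concrete, here is a minimal Python sketch of JWT-based decentralized access control. It assumes the PyJWT package; the claim names, scopes, keys, and lifetimes are illustrative, not the authors' exact schema.

```python
# Minimal sketch of token-based access control between microservices.
# Requires PyJWT (pip install pyjwt). Claim names and keys are
# illustrative, not the schema used in the paper.
import time
import jwt

GATEWAY_KEY = "external-secret"   # shared with the authentication service
SERVICE_KEY = "internal-secret"   # shared among trusted microservices

def issue_external_token(user_id: str, roles: list[str]) -> str:
    """Issued once by the external authentication service."""
    payload = {"sub": user_id, "roles": roles, "exp": time.time() + 3600}
    return jwt.encode(payload, GATEWAY_KEY, algorithm="HS256")

def exchange_for_internal_token(external_token: str, scope: str) -> str:
    """A microservice swaps the external token for a narrow internal one."""
    claims = jwt.decode(external_token, GATEWAY_KEY, algorithms=["HS256"])
    internal = {"sub": claims["sub"], "scope": scope, "exp": time.time() + 60}
    return jwt.encode(internal, SERVICE_KEY, algorithm="HS256")

def authorize(internal_token: str, required_scope: str) -> bool:
    """Local authorization decision: no call back to a central server."""
    try:
        claims = jwt.decode(internal_token, SERVICE_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    return claims["scope"] == required_scope
```

Because each service verifies the short-lived internal token locally, the authorization responsibility is distributed rather than concentrated in one gateway, which is the property the abstract attributes to the decentralized design.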
17 pages, 8965 KiB  
Article
Timepix3: Compensation of Thermal Distortion of Energy Measurement
by Martin Urban, Ondrej Nentvich, Lukas Marek, David Hladik, Rene Hudec and Ladislav Sieger
Sensors 2023, 23(6), 3362; https://doi.org/10.3390/s23063362 - 22 Mar 2023
Viewed by 2007
Abstract
The Timepix3 is a hybrid pixellated radiation detector consisting of a 256 px × 256 px radiation-sensitive matrix. Research has shown that it is susceptible to energy spectrum distortion due to temperature variations, which can lead to a relative measurement error of up to 35% in the tested temperature range of 10 °C to 70 °C. To overcome this issue, this study proposes a complex compensation method that reduces the error to less than 1%. The compensation method was tested with different radiation sources, focusing on energy peaks up to 100 keV. The results showed that a general model for temperature distortion compensation could be established: the error in the X-ray fluorescence spectrum of Lead (74.97 keV) was reduced from 22% to less than 2% at 60 °C after the correction was applied. The validity of the model was also verified at temperatures below 0 °C, where the relative measurement error for the Tin peak (25.27 keV) was reduced from 11.4% to 2.1% at −40 °C. These results demonstrate the effectiveness of the proposed compensation method and models in significantly improving the accuracy of energy measurements. This has implications for fields of research and industry that require accurate radiation energy measurements but cannot afford to spend power on cooling or temperature stabilisation of the detector.
(This article belongs to the Special Issue Sensing for Space Applications (Volume II))
Figures:
Figure 1. Illustration and description of the MiniPIX Timepix3 detector without a protective case.
Figure 2. Schematic arrangement for measuring the energy spectrum of radiation: (a) X-ray fluorescence; (b) radionuclides.
Figure 3. Part of the measured energy spectrum of the radiation produced by the X-ray fluorescence of a Tantalum target with a 500 μm Si Timepix3 detector without temperature stabilisation. Measured data (blue markers) are shown together with the G_e(E, A1, A2, μ, σ) function (combination of a Gaussian and a complementary error function, green line) and the final fit of the G_a(E, A1, μ, σ) function (orange line) to the Ta-Kα1 line. The resulting peak position μ_χ² determined by χ² minimisation is 57.072 keV (purple dash-dotted line). Given the statistically significant number of observed events, the maximum statistical uncertainty is <80 counts in the range shown; individual error bars are omitted from the measured points for clarity.
Figure 4. Influence of temperature change of MiniPIX Timepix3 detectors equipped with a 500 μm Si sensor on the energy spectrum measured by Time-over-Threshold in data-driven mode. The plotted data represent the mean values over the tested detectors, and the error bars indicate their minimum and maximum values: (a) absolute measurement accuracy; (b) relative error of measurement.
Figure 5. Model of the temperature-dependent distortion of the measurement accuracy of the radiation energy spectrum (Time-over-Threshold in data-driven mode) with a Timepix3 detector equipped with a 500 μm Si sensor. The error bars indicate the inaccuracy of a given parameter obtained by the fitting function: (a) temperature dependence; (b) energy dependence.
Figure 6. Change in absolute measurement accuracy and relative measurement error of the tested sources after applying the proposed correction. The plotted data represent the mean values over the tested detectors, and the error bars indicate their minimum and maximum values: (a) absolute measurement accuracy; (b) relative error of measurement.
Figure 7. Comparison of the energy spectrum obtained by X-ray fluorescence with a Lead target under different compensations for thermal distortion. The measurements were performed with detector E at three sensor temperatures (30 °C, 40 °C and 50 °C): (a) without compensation; (b) individual compensation model; (c) generalised compensation model.
Figure 8. Timepix3 detector arrangement for thermal testing, verification, and extrapolation of the compensation model in the vacuum chamber: (a) arrangement for X-ray fluorescence measurement; (b) detail of the detector placement on the Peltier module with a feedback thermometer.
Figure 9. Model of temperature compensation of energy-measurement distortion (X-ray fluorescence with a Tin target; detector in Time-over-Threshold measurement mode with data-driven readout) using a Timepix3 detector with a 1000 μm Silicon sensor in the temperature range from −40 °C to 60 °C. An extrapolated model based on data measured above 0 °C is also shown.
Figure 10. Comparison of the energy spectrum obtained by X-ray fluorescence with a Tin target under different compensations for thermal distortion. Results are shown for a detector with a 1000 μm Si sensor placed in vacuum at different sensor temperatures (−40 °C, 0 °C and 60 °C): (a) without compensation; (b) individual compensation model; (c) extrapolated compensation model.
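The abstract describes correcting a temperature-dependent distortion of measured energy. The sketch below illustrates the idea with a distortion that is linear in temperature and has an energy-dependent slope; the coefficients and the functional form are placeholders, not the published fit.

```python
# Illustrative sketch of a temperature-compensation step for energy
# measurements. The functional form fitted in the paper is not
# reproduced here; this assumes a distortion linear in temperature
# with an energy-dependent slope, calibrated at a reference temperature.
import numpy as np

T_REF = 20.0  # calibration temperature in deg C (assumed)

def distortion_slope(energy_kev: np.ndarray) -> np.ndarray:
    """Energy-dependent slope of the relative error, e.g. fitted from
    fluorescence peaks at several temperatures (placeholder values)."""
    a, b = 1.5e-3, 2.0e-6
    return a + b * energy_kev

def compensate(measured_kev: np.ndarray, temp_c: float) -> np.ndarray:
    """Invert the modelled distortion E_meas = E_true * (1 + k(E) * dT)."""
    k = distortion_slope(measured_kev)
    return measured_kev / (1.0 + k * (temp_c - T_REF))

# Example: fluorescence lines read 5% high at 60 deg C are pulled back
peaks = np.array([25.27, 74.97])  # Tin and Lead lines from the abstract
print(compensate(peaks * 1.05, 60.0))
```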
33 pages, 16682 KiB  
Article
A Two-Stage Automatic Color Thresholding Technique
by Shamna Pootheri, Daniel Ellam, Thomas Grübl and Yang Liu
Sensors 2023, 23(6), 3361; https://doi.org/10.3390/s23063361 - 22 Mar 2023
Viewed by 4304
Abstract
Thresholding is a prerequisite for many computer vision algorithms. By suppressing the background in an image, one can remove unnecessary information and shift one's focus to the object of inspection. We propose a two-stage histogram-based background suppression technique based on the chromaticity of the image pixels. The method is unsupervised, fully automated, and needs no training or ground-truth data. Its performance was evaluated using a printed circuit assembly (PCA) board dataset and the University of Waterloo skin cancer dataset. Accurate background suppression in PCA boards facilitates the inspection of digital images with small objects of interest, such as text or microcontrollers on a PCA board, while the segmentation of skin cancer lesions will help doctors automate skin cancer detection. The results showed a clear and robust background–foreground separation across various sample images under different camera and lighting conditions, which out-of-the-box implementations of existing state-of-the-art thresholding methods could not achieve.
(This article belongs to the Special Issue Machine Learning in Robust Object Detection and Tracking)
Figures:
Figure 1. (a) Overall sequence of the two-stage global–local thresholding technique; gray-shaded boxes indicate the key contributions of the proposed method. (b) Schematic overview of the global–local thresholding stages.
Figure 2. (a) Input color image. (b) Hue and saturation representation for the HSV format in OpenCV: the hue ranges from 0 to 180, the saturation from 0 to 255, and the value is fixed at 255 for this representation. (c) Probability mass function of the hue component (symmetric unimodal histogram) of the input image. The nominated hue ranges were 18 to 35, 72 to 93, and 116 to 130, where the area under the curve and the gradient exceeded a cut-off value within a window. The shaded region (72 to 93) shows the optimal global hue range selected by the proposed algorithm according to the maximum-continuous-hue-range heuristic, including the peak hue at position 83. (d) Probability mass function of the saturation component of the input image (skewed unimodal histogram). The nominated saturation ranges were 30 to 55, 75 to 90, and 137 to 255. The shaded region (137 to 255) shows the optimal global saturation range selected by the maximum-continuous-saturation-range heuristic. The small black box indicates the window, and the red arrow represents the gradient value in the corresponding window.
Figure 3. Automatic detection of local windows to refine the background. (a) Input color image. (b) Globally thresholded image; global hue range (75, 90), global saturation range (111, 255). (c) Relevant blobs and selected regions before applying local thresholding. (d) Relevant corresponding image regions. (e) Refined background after global and local thresholding; local hue and saturation ranges of the two selected subregions: (75, 80), (102, 234) and (70, 95), (101, 252), respectively.
Figure 4. (a–d) Sample images of the PCA dataset and (e–h) corresponding ground truth. (i–l) Sample images of the skin cancer dataset and (m–p) corresponding ground truth.
Figure 5. Sample thresholding results using a skin lesion image: (a) input image, (b) ground truth, (c) Otsu [11], (d) Kapur et al. [12], (e) Niblack [23], (f) P-tile [27], (g) two-peak [27], (h) local contrast [27], (i) Sauvola et al. [28], (j) Wolf and Jolion [36], (k) Feng and Tan [37], (l) Bradley and Roth [5], (m) Singh et al. [38], (n) DTP-NET [46] pre-trained model, (o) U-Net [44] with ResNet-152 backbone, (p) proposed method.
Figure 6. Sample thresholding results using a PCA board image: (a) input image, (b) ground truth, (c) Otsu [11], (d) Kapur et al. [12], (e) Niblack [23], (f) P-tile [27], (g) two-peak [27], (h) local contrast [27], (i) Sauvola et al. [28], (j) Wolf and Jolion [36], (k) Feng and Tan [37], (l) Bradley and Roth [5], (m) Singh et al. [38], (n) DTP-NET [46] pre-trained model, (o) U-Net [44] with ResNet-152 backbone, (p) proposed method.
Figure 7. (a–d) Sample images of the PCA board dataset with varying background colors and (e–h) corresponding ground truth. (i–l) Results using Singh et al. [38], (m–p) DTP-Net fine-tuned model [46], (q–t) U-Net with ResNet-152 backbone, and (u–x) proposed method.
Figure 8. (a–e) Sample images of the PCA board dataset with varying image intensity and (f–j) corresponding ground truth. (k–o) Results using Singh et al. [38], (p–t) DTP-Net fine-tuned model [46], (u–y) U-Net (ResNet-152) fine-tuned model, and (z–ad) proposed method.
Figure 9. Global and local thresholding results with varying image resolution. Left column (a,d,g,j): input PCA board image. Centre column (b,e,h,k): globally thresholded image. Right column (c,f,i,l): locally thresholded image. Image resolutions: (a) 41.9 MP, (d) 10.1 MP, (g) 1.9 MP, (j) 0.6 MP.
Figure 10. Different sample images and the corresponding globally and locally thresholded outputs: (a,b) screws on a wooden table; (c,d) needles on a red background; (e,f) a drone image of container ships; (g,h) a photo of an old text document; (i–l) text with background noise from the DIBCO dataset [75].
Figure 11. Thresholding results of images with similarly colored foreground or background regions. Left column: input image. Centre column: ground truth. Right column: image thresholded by the proposed method. The red-colored component D11 in (a) is misclassified as background in (c), based on the ground truth (b). Ink stains in the input text image (d) are misclassified as foreground in (f). Images (d–f) are from the DIBCO database [75].
Figure A1. Multiple-comparisons test: pair-wise comparison of the methods based on PSNR scores. The central red line indicates the median PSNR of each method, and the blue box is the interquartile range (IQR) between the 25th and 75th percentiles. The dashed black lines are the whiskers, extending to the minimum and maximum data points; statistical outliers are marked with a red '+'.
Figure A2. The input image used to test different parameter configurations.
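As a rough illustration of the global stage, the following OpenCV/NumPy sketch picks a dominant background hue range from the hue histogram and suppresses it. The range-nomination rule here (peak ± spread) is a simplification of the paper's area-and-gradient heuristic, and the input file name is hypothetical.

```python
# Simplified global chromatic thresholding: find the dominant hue range
# in the histogram and mask it out as background. Assumes OpenCV.
import cv2
import numpy as np

def global_hue_mask(bgr: np.ndarray, spread: int = 10) -> np.ndarray:
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)         # OpenCV hue is 0..180
    hue = hsv[:, :, 0]
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()
    peak = int(hist.argmax())                          # dominant background hue
    lo, hi = max(peak - spread, 0), min(peak + spread, 179)
    background = (hue >= lo) & (hue <= hi)             # True where background
    return np.where(background, 0, 255).astype(np.uint8)

img = cv2.imread("board.png")                          # hypothetical input file
if img is not None:
    cv2.imwrite("foreground_mask.png", global_hue_mask(img))
```

The paper's second (local) stage would then re-apply a similar range selection inside automatically detected windows to refine this global mask.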
17 pages, 43226 KiB  
Article
Fabrication of Ultra-Sharp Tips by Dynamic Chemical Etching Process for Scanning Near-Field Microwave Microscopy
by C. H. Joseph, Giovanni Capoccia, Andrea Lucibello, Emanuela Proietti, Giovanni Maria Sardi, Giancarlo Bartolucci and Romolo Marcelli
Sensors 2023, 23(6), 3360; https://doi.org/10.3390/s23063360 - 22 Mar 2023
Viewed by 1915
Abstract
This work details an effective dynamic chemical etching technique for fabricating ultra-sharp tips for Scanning Near-Field Microwave Microscopy (SNMM). The protruding cylindrical part of the inner conductor of a commercial SMA (Sub Miniature A) coaxial connector is tapered by a dynamic chemical etching process using ferric chloride. The technique is optimized to fabricate ultra-sharp probe tips with controllable shapes, tapered down to a tip-apex radius of about 1 μm. The detailed optimization enabled the fabrication of reproducible high-quality probes suitable for non-contact SNMM operation. A simple analytical model is also presented to better describe the dynamics of tip formation. The near-field characteristics of the tips are evaluated by finite element method (FEM) electromagnetic simulations, and the performance of the probes has been validated experimentally by imaging a metal-dielectric sample with the in-house scanning near-field microwave microscopy system.
(This article belongs to the Special Issue Microwave Techniques for Spectroscopy and Imaging Applications)
Figures:
Figure 1. Experimental setup. (a) Geometry of the SMA flange connector; all dimensions are in mm. (b) Schematic of the experimental arrangement (1. mechanical frame, 2. solution container, 3. movement assembly, 4. moving stage, 5. SMA male plug, 6. SMA female connector, 7. stepper motor). (c) Photograph of the experimental arrangement.
Figure 2. Schematic of the flange connector to be etched (not to scale).
Figure 3. Schematic diagram of the dynamic etching process.
Figure 4. The final shape of the etched pin, shown qualitatively for the two possible outcomes: (a) meniscus effects absent; (b) meniscus effects considered.
Figure 5. Reshaping of the pin during the etching process.
Figure 6. Reshaping of the pin during the etching process: (a) starting shape, (b) after 300 cycles, (c) after 500 cycles, (d) after 700 cycles, (e) after 900 cycles, (f) final shape after 1000 cycles.
Figure 7. Photograph of the prepared solutions with different concentrations.
Figure 8. Etching results with solutions of different concentrations: (a) 20% solution, (b) 25% solution, (c) 30% solution, (d) 40% solution.
Figure 9. SEM micrographs of the fabricated tips: (a) 2 mm/s velocity with 25% solution, (b) top view of the tip prepared at 2 mm/s with 25% solution, (c) 4 mm/s with 25% solution, (d) 2 mm/s with 20% solution.
Figure 10. Dynamic etching of the pin from the initial diameter (250 μm) down to micrometric size. Complete etching is reached after 3000 s ≈ 50 min.
Figure 11. Simulated electric field maps of the probes: (a) cylindrical pin; (b) flat-edge tip; (c) blunt-edge tip. (d) Normalized E-field profile of the cylindrical pin taken at two different distances from the pin. The field profiles of the flat- and blunt-edge conical tapered tips were taken at vertical distances of (e) 10 μm and (f) 1 μm.
Figure 12. SNMM imaging of gold-patterned shapes fabricated on an alumina substrate by photolithography. In (a), the image was obtained with the ordinary pin of a microwave flange connector; in (b,c), amplitude and phase were measured using the signal reflected from the sample back to an etched micrometric tip.
18 pages, 3686 KiB  
Article
Photoplethysmography Driven Hypertension Identification: A Pilot Study
by Liangwen Yan, Mingsen Wei, Sijung Hu and Bo Sheng
Sensors 2023, 23(6), 3359; https://doi.org/10.3390/s23063359 - 22 Mar 2023
Cited by 4 | Viewed by 2019
Abstract
To prevent and diagnose hypertension early, there is a growing demand to identify its states in a way that aligns with patients. This pilot study investigates a non-invasive method that combines photoplethysmographic (PPG) signals with deep learning algorithms. A portable PPG acquisition device (Max30101 photonic sensor) was utilized to (1) capture PPG signals and (2) wirelessly transmit data sets. In contrast to traditional feature-engineering machine learning classification schemes, this study preprocessed raw data and applied a deep learning algorithm (LSTM-Attention) directly to extract deeper correlations between these raw datasets. The Long Short-Term Memory (LSTM) model, with its gate mechanism and memory unit, handles long sequence data effectively, avoiding vanishing gradients and capturing long-term dependencies. To enhance the correlation between distant sampling points, an attention mechanism was introduced to capture more data-change features than a separate LSTM model. A protocol with 15 healthy volunteers and 15 hypertension patients was implemented to obtain these datasets. The results demonstrate that the proposed model achieves satisfactory performance (accuracy: 0.991; precision: 0.989; recall: 0.993; F1-score: 0.991) and outperforms related studies. These outcomes indicate that the proposed method could effectively diagnose and identify hypertension, so a paradigm for cost-effectively screening hypertension could rapidly be established using wearable smart devices.
(This article belongs to the Topic Machine Learning and Biomedical Sensors)
Figures:
Figure 1. Flowchart of the PPG data acquisition system.
Figure 2. PPG data acquisition device.
Figure 3. PPG signal collection experiment.
Figure 4. Data processing flow chart.
Figure 5. (a) Original signal and its spectrum; (b) signal and spectrum after filtering.
Figure 6. Cubic spline interpolation removes the baseline deviation of the PPG trough.
Figure 7. Structure diagram of LSTM-Attention.
Figure 8. Schematic diagram of LSTM.
Figure 9. Abstract Encoder–Decoder framework.
Figure 10. Schematic diagram of the attention mechanism structure.
Figure 11. (a) Confusion matrix of the test set; (b) confusion matrix of the robustness verification experiment; (c) confusion matrix of the reliability verification experiment.
Figure 12. (a) Accuracy curve; (b) loss curve.
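Below is a minimal PyTorch sketch of an LSTM classifier with additive attention over time steps, the general architecture the abstract describes. Layer sizes, the window length, and the attention form are illustrative, not the authors' exact configuration.

```python
# LSTM with attention over time steps for binary classification of
# PPG windows. Sizes are illustrative placeholders.
import torch
import torch.nn as nn

class LSTMAttention(nn.Module):
    def __init__(self, n_features=1, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)       # additive attention score
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, features)
        outputs, _ = self.lstm(x)                # (batch, time, hidden)
        weights = torch.softmax(self.score(outputs), dim=1)  # over time
        context = (weights * outputs).sum(dim=1)             # weighted sum
        return self.head(context)

model = LSTMAttention()
logits = model(torch.randn(8, 250, 1))           # 8 windows of 250 samples
print(logits.shape)                              # torch.Size([8, 2])
```

The attention weights let distant sampling points contribute to the final decision directly, which is the motivation the abstract gives for adding attention to a plain LSTM.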
22 pages, 8315 KiB  
Article
Real-Time Fire Smoke Detection Method Combining a Self-Attention Mechanism and Radial Multi-Scale Feature Connection
by Chuan Jin, Anqi Zheng, Zhaoying Wu and Changqing Tong
Sensors 2023, 23(6), 3358; https://doi.org/10.3390/s23063358 - 22 Mar 2023
Cited by 7 | Viewed by 3265
Abstract
Fire remains a pressing issue that requires urgent attention. Due to its uncontrollable and unpredictable nature, it can easily trigger chain reactions and increase the difficulty of extinguishing, posing a significant threat to people's lives and property. The effectiveness of traditional photoelectric- or ionization-based detectors is limited when detecting fire smoke because of the variable shape, characteristics, and scale of the detected objects and the small size of the fire source in its early stages. Additionally, the uneven distribution of fire and smoke and the complexity and variety of their surroundings make pixel-level feature information inconspicuous and identification difficult. We propose a real-time fire smoke detection algorithm based on multi-scale feature information and an attention mechanism. Firstly, the feature information layers extracted from the network are fused into a radial connection to enhance the semantic and location information of the features. Secondly, to address the challenge of recognizing harsh fire sources, we designed a permutation self-attention mechanism that concentrates on features along the channel and spatial directions to gather contextual information as accurately as possible. Thirdly, we constructed a new feature extraction module to increase the detection efficiency of the network while retaining feature information. Finally, we propose a cross-grid sample matching approach and a weighted decay loss function to handle the issue of imbalanced samples. Our model achieves the best detection results compared to standard detection methods on a handcrafted fire smoke detection dataset, with AP^val reaching 62.5%, AP_S^val reaching 58.5%, and FPS reaching 113.6.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
Figures:
Figure 1. Two types of pyramid structures commonly used in object detection for realistic scenes.
Figure 2. Framework of the three main forms of soft attention: sub-figures (a,b) show channel attention, (c,d) spatial attention, and (e,f) self-attention.
Figure 3. Structure of the improved fire smoke detection model based on YOLOv7-X.
Figure 4. Radially connected FPN-PAN structure (RC FPN-PAN), which stitches feature information from the initial layer to the final prediction layer through the residual structure shown by the dashed line.
Figure 5. Structure of the CDPB module, which uses DWConv and PWConv structures in place of the original convolutional layers (with kernel_size equal to three, the CDPB requires about one-ninth the computation of a full convolution).
Figure 6. A permutation self-attention mechanism processes both channel and spatial feature information.
Figure 7. Schematic diagram of part of the dataset, with (a) real target objects to be detected and (b) negative samples that are susceptible to interference.
Figure 8. Comparison with one-stage object detection methods in terms of inference speed.
Figure 9. Comparison of heatmap results with one-stage object detectors; the deeper the colour of an area, the stronger the attention.
Figure 10. Comparison of results with standard one-stage object detectors.
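Figure 5's caption describes the CDPB module as replacing full convolutions with depthwise plus pointwise (DWConv/PWConv) pairs at kernel size 3, cutting compute to roughly one-ninth. A PyTorch sketch of that pairing follows; the normalization and activation choices here are assumptions, not the exact module from the paper.

```python
# Depthwise-separable convolution: a per-channel 3x3 depthwise filter
# followed by a 1x1 pointwise channel mixer, as in the CDPB description.
import torch
import torch.nn as nn

def dw_pw(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        # depthwise: one 3x3 filter per input channel (groups=in_ch)
        nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.SiLU(),
        # pointwise: 1x1 convolution mixes information across channels
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.SiLU(),
    )

block = dw_pw(64, 128)
y = block(torch.randn(1, 64, 80, 80))
print(y.shape)  # torch.Size([1, 128, 80, 80])
```

For a 3x3 kernel the depthwise pass costs 9 multiply-accumulates per channel instead of 9 x in_ch for a full convolution, which is where the roughly one-ninth figure comes from.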
17 pages, 6127 KiB  
Article
Fast Distributed Model Predictive Control Method for Active Suspension Systems
by Niaona Zhang, Sheng Yang, Guangyi Wu, Haitao Ding, Zhe Zhang and Konghui Guo
Sensors 2023, 23(6), 3357; https://doi.org/10.3390/s23063357 - 22 Mar 2023
Cited by 6 | Viewed by 2352
Abstract
In order to balance the performance indices and computational efficiency of the active suspension control system, this paper offers a fast distributed model predictive control (DMPC) method based on multi-agents for the active suspension system. Firstly, a seven-degrees-of-freedom model of the vehicle is created, and a reduced-dimension vehicle model is established based on graph theory in accordance with its network topology and mutual coupling constraints. Then, for engineering applications, a multi-agent-based distributed model predictive control method for the active suspension system is presented, in which the partial differential equation of the rolling optimization is solved by a radial basis function (RBF) neural network. This improves the computational efficiency of the algorithm while still satisfying the multi-objective optimization. Finally, a joint CarSim and Matlab/Simulink simulation shows that the control system can greatly reduce the vertical acceleration, pitch acceleration, and roll acceleration of the vehicle body. In particular, under steering conditions, it accounts for the safety, comfort, and handling stability of the vehicle at the same time.
(This article belongs to the Topic Vehicle Dynamics and Control)
Figures:
Figure 1. Overall framework of the paper.
Figure 2. Seven-DOF model of the full vehicle.
Figure 3. Multi-agent communication topology.
Figure 4. Pavement excitations.
Figure 5. RBF neural network prediction results.
Figure 6. Vehicles under normal driving conditions: (a) suspension actuation force; (b) vertical acceleration; (c) pitch angular acceleration; (d) roll angular acceleration.
Figure 7. Vehicles under turning conditions: (a) suspension actuation force; (b) vertical acceleration; (c) pitch angular acceleration; (d) roll angular acceleration.
Figure 8. Comparison with conventional model predictive control: (a) vertical acceleration; (b) pitch angular acceleration; (c) roll angular acceleration.
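The paper accelerates its rolling-optimization step with an RBF neural network. The NumPy sketch below shows a textbook Gaussian-RBF approximator with least-squares output weights; the target function, centres, and width are illustrative, not the authors' setup.

```python
# Gaussian RBF function approximator: fixed centres, linear output
# weights solved by least squares.
import numpy as np

def rbf_design(x: np.ndarray, centres: np.ndarray, width: float) -> np.ndarray:
    d2 = (x[:, None] - centres[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.tanh(x) + 0.05 * rng.standard_normal(x.size)   # stand-in target

centres = np.linspace(-3, 3, 15)
Phi = rbf_design(x, centres, width=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)            # linear output weights

y_hat = rbf_design(x, centres, 0.5) @ w
print(float(np.max(np.abs(y_hat - y))))                # small residual
```

Because only the linear output weights are trained, evaluation reduces to one matrix product, which is the kind of speed-up that makes the approach attractive inside a receding-horizon loop.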
27 pages, 739 KiB  
Article
Direction of Arrival Method for L-Shaped Array with RF Switch: An Embedded Implementation Perspective
by Tiago Troccoli, Juho Pirskanen, Jari Nurmi, Aleksandr Ometov, Jorge Morte, Elena Simona Lohan and Ville Kaseva
Sensors 2023, 23(6), 3356; https://doi.org/10.3390/s23063356 - 22 Mar 2023
Cited by 3 | Viewed by 2917
Abstract
This paper addresses the challenge of implementing Direction of Arrival (DOA) methods for indoor localization using Internet of Things (IoT) devices, particularly with the recent direction-finding capability of Bluetooth. DOA methods are complex numerical methods that require significant computational resources and can quickly deplete the batteries of the small embedded systems typically found in IoT networks. To address this challenge, the paper presents a novel Unitary R-D Root MUSIC for L-shaped arrays that is tailor-made for such devices and utilizes a switching protocol defined by Bluetooth. The solution exploits the radio communication system design to speed up execution, and its root-finding method circumvents complex arithmetic despite being applied to complex polynomials. The paper reports experiments on energy consumption, memory footprint, accuracy, and execution time on a commercial series of constrained embedded IoT devices without operating systems or software layers to prove the viability of the implemented solution. The results demonstrate that the solution achieves good accuracy and an execution time of a few milliseconds, making DOA implementation viable on IoT devices.
Figures:
Figure 1. An IoT mesh network with a DOA-based positioning system.
Figure 2. An overview example of a node's hardware architecture.
Figure 3. Depiction of the L-shaped array with its antennas (black dots), angles, and the signal direction.
Figure 4. Depiction of the transmitter and receiver operations.
Figure 5. Example of the Round Robin switch pattern of an L-shaped array with three antennas; only the sample slots are shown.
Figure 6. Algorithm overview of the two covariance noise-subspace computations: the left method calculates the covariance directly, while the right one calculates it indirectly via the signal subspace.
Figure 7. Overview of the experiment.
Figure 8. The implemented solution with RF switch compensation (a) and without it (b), run in a simulation (MATLAB).
Figure 9. RMSE of the accuracy over the SNR, run on an nRF52840 SoC.
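For orientation, here is textbook Root-MUSIC for a single uniform linear array in NumPy, the building block that the paper's Unitary R-D Root MUSIC for L-shaped arrays extends (the published method additionally avoids complex arithmetic and compensates for the RF switching). Array sizes and the simulated source are illustrative.

```python
# Standard Root-MUSIC for a half-wavelength-spaced uniform linear array.
import numpy as np

def root_music(snapshots: np.ndarray, n_sources: int, d_over_lambda=0.5):
    m = snapshots.shape[0]                        # number of antennas
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    eigval, eigvec = np.linalg.eigh(R)            # ascending eigenvalues
    En = eigvec[:, : m - n_sources]               # noise subspace
    C = En @ En.conj().T
    # Root-MUSIC polynomial coefficients: sums of the diagonals of C
    coeffs = np.array([np.trace(C, offset=k) for k in range(m - 1, -m, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]            # keep roots inside unit circle
    roots = roots[np.argsort(1.0 - np.abs(roots))[:n_sources]]  # nearest circle
    phases = np.angle(roots)
    return np.degrees(np.arcsin(phases / (2 * np.pi * d_over_lambda)))

# Simulated single source at 20 degrees on an 8-element array
m, n_snap, theta = 8, 200, np.radians(20.0)
steer = np.exp(1j * 2 * np.pi * 0.5 * np.arange(m) * np.sin(theta))
rng = np.random.default_rng(1)
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
X = np.outer(steer, s) + 0.05 * (rng.standard_normal((m, n_snap))
                                 + 1j * rng.standard_normal((m, n_snap)))
print(root_music(X, 1))                           # approx [20.]
```

The unitary variant in the paper transforms the covariance so the polynomial becomes real-coefficient, which is what lets the root finder avoid complex arithmetic on the embedded target.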
21 pages, 9140 KiB  
Article
Recognition of Occluded Goods under Prior Inference Based on Generative Adversarial Network
by Mingxuan Cao, Kai Xie, Feng Liu, Bohao Li, Chang Wen, Jianbiao He and Wei Zhang
Sensors 2023, 23(6), 3355; https://doi.org/10.3390/s23063355 - 22 Mar 2023
Viewed by 1691
Abstract
Aiming at the recognition of goods in intelligent retail dynamic visual containers, two problems that lead to low recognition accuracy must be addressed: the lack of goods features caused by occlusion by the hand, and the high similarity of goods. This study therefore proposes an approach for occluded goods recognition based on a generative adversarial network combined with prior inference. With DarkNet53 as the backbone network, semantic segmentation is used to locate the occluded part in the feature extraction network, while the YOLOX decoupling head obtains the detection frame. Subsequently, a generative adversarial network under prior inference restores and expands the features of the occluded parts, and a weighted attention module combining multi-scale spatial attention and effective channel attention is proposed to select fine-grained features of goods. Finally, a metric learning method based on the von Mises–Fisher distribution is proposed to increase the class spacing of features, and the distinguished features are used to recognize goods at a fine-grained level. The experimental data were all obtained from a self-made smart retail container dataset containing 12 types of goods, including four pairs of similar goods. Experimental results reveal that the peak signal-to-noise ratio and structural similarity under the improved prior inference are 0.7743 and 0.0183 higher than those of the other models, respectively. Compared with the other optimal models, the proposed approach improves mAP by 1.2% and recognition accuracy by 2.82%. This study addresses both occlusion caused by hands and the high similarity of goods, thus meeting the commodity-recognition accuracy requirements of intelligent retail and exhibiting good application prospects.
(This article belongs to the Special Issue Image Processing and Pattern Recognition Based on Deep Learning)
Figures:
Figure 1. Algorithm flow of the proposed architecture for goods recognition. The numbers 1 and 2 in the upper-left and upper-right corners represent the first and second parts of GAN pretraining.
Figure 2. Semantic Inference Module: the current encoding feature ϕ_l and the decoding feature φ_(l−1) are sent to the SIM to be fused with a skip connection.
Figure 3. Structure of the generator.
Figure 4. Feature extraction of the RGB three channels.
Figure 5. Flow of feature selection.
Figure 6. Cross-section of features on the unit hypersphere; points of different colors represent feature vectors of different classes.
Figure 7. The number of each kind of goods in our dataset.
Figure 8. Some of the pictures in our dataset; below the images are the product names.
Figure 9. The UI interface of the proposed method.
Figure 10. Comparison experiment of feature restoration and expansion.
Figure 11. Ablation experiment: P–R curves under different network combinations.
Figure 12. Heatmaps of various attention mechanism algorithms.
Figure 13. Comparison of training parameters across algorithms (bold indicates the best result).
Figure 14. Recognition results in various scenarios and for various persons.
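The von Mises–Fisher-based metric learning the abstract mentions amounts to classifying L2-normalized features against class mean directions on the unit hypersphere (compare Figure 6). A PyTorch sketch of such a head follows; the concentration value and wiring are assumptions, not the paper's exact loss.

```python
# vMF-style classification head: logits are scaled cosine similarities
# between normalized features and normalized class prototypes, which
# increases angular spacing between classes during training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VMFHead(nn.Module):
    def __init__(self, dim: int, n_classes: int, kappa: float = 16.0):
        super().__init__()
        self.proto = nn.Parameter(torch.randn(n_classes, dim))
        self.kappa = kappa                       # concentration (logit scale)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        f = F.normalize(features, dim=1)         # points on the hypersphere
        w = F.normalize(self.proto, dim=1)       # class mean directions
        return self.kappa * f @ w.t()            # scaled cosine logits

head = VMFHead(dim=128, n_classes=12)            # 12 goods classes (from paper)
logits = head(torch.randn(4, 128))
loss = F.cross_entropy(logits, torch.tensor([0, 1, 2, 3]))
print(loss.item())
```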
16 pages, 3093 KiB  
Article
Using a Hybrid Neural Network and a Regularized Extreme Learning Machine for Human Activity Recognition with Smartphone and Smartwatch
by Tan-Hsu Tan, Jyun-Yu Shih, Shing-Hong Liu, Mohammad Alkhaleefah, Yang-Lang Chang and Munkhjargal Gochoo
Sensors 2023, 23(6), 3354; https://doi.org/10.3390/s23063354 - 22 Mar 2023
Cited by 5 | Viewed by 2458
Abstract
Mobile health (mHealth) utilizes mobile devices, mobile communication techniques, and the Internet of Things (IoT) to improve not only traditional telemedicine and monitoring and alerting systems, but also fitness and medical information awareness in daily life. In the last decade, human activity recognition (HAR) has been extensively studied because of the strong correlation between people's activities and their physical and mental health; HAR can also be used to care for elderly people in their daily lives. This study proposes an HAR system for classifying 18 types of physical activity using data from sensors embedded in smartphones and smartwatches. The recognition process consists of two parts: feature extraction and activity recognition. To extract features, a hybrid structure consisting of a convolutional neural network (CNN) and a bidirectional gated recurrent unit (BiGRU) was used. For activity recognition, a single-hidden-layer feedforward neural network (SLFN) with a regularized extreme learning machine (RELM) algorithm was used. The experimental results show an average precision of 98.3%, recall of 98.4%, F1-score of 98.4%, and accuracy of 98.3%, results that are superior to those of existing schemes.
(This article belongs to the Special Issue Recent Developments in Wireless Network Technology)
Figures:
Figure 1. Structural diagram of the proposed HAR system, including the data processing unit, the feature extraction unit, and the classification unit.
Figure 2. Structural diagram of the feature-extraction model.
Figure 3. Learning curves for the hybrid CNN+LSTM model: (a) accuracy and (b) loss.
Figure 4. Learning curves for the hybrid CNN+GRU model: (a) accuracy and (b) loss.
Figure 5. Learning curves for the hybrid CNN+BiLSTM model: (a) accuracy and (b) loss.
Figure 6. Learning curves for the hybrid CNN+BiGRU model: (a) accuracy and (b) loss.
Figure 7. Confusion matrix for the classification of eighteen activities with the ELM algorithm.
Figure 8. Confusion matrix for the classification of eighteen activities with the RELM algorithm.
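A regularized extreme learning machine trains only the output weights of a random-hidden-layer network, in closed form. The NumPy sketch below shows that closed form, beta = (H^T H + I/C)^(-1) H^T T; the layer sizes, regularization constant, and stand-in data are illustrative.

```python
# RELM: random fixed hidden layer, ridge-regression output weights.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_classes, C = 20, 100, 18, 10.0   # 18 activities (paper)

W = rng.standard_normal((n_in, n_hidden))           # fixed random weights
b = rng.standard_normal(n_hidden)

def hidden(X):                                      # sigmoid hidden layer
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

X_train = rng.standard_normal((500, n_in))          # stand-in features
labels = rng.integers(0, n_classes, 500)
T = np.eye(n_classes)[labels]                       # one-hot targets

H = hidden(X_train)
beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)

pred = hidden(X_train) @ beta
print((pred.argmax(1) == labels).mean())            # training accuracy
```

In the paper's pipeline, X_train would be the features produced by the CNN+BiGRU extractor rather than random data; the appeal of RELM is that this final stage needs no iterative training at all.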
18 pages, 3725 KiB  
Article
A Scheduling Method of Using Multiple SAR Satellites to Observe a Large Area
by Qicun Zheng, Haixia Yue, Dacheng Liu and Xiaoxue Jia
Sensors 2023, 23(6), 3353; https://doi.org/10.3390/s23063353 - 22 Mar 2023
Cited by 2 | Viewed by 1798
Abstract
This paper presents a scheduling problem of using multiple synthetic aperture radar (SAR) satellites to observe a large irregular area (SMA). SMA is usually considered a nonlinear combinatorial optimization problem, and its solution space, which is strongly coupled with geometry, grows exponentially with the problem's magnitude. It is assumed that each solution of SMA yields a profit associated with the acquired portion of the target area, and the objective of this paper is to find the solution yielding the maximal profit. SMA is solved by a new method composed of three successive phases: grid space construction, candidate strip generation, and strip selection. First, grid space construction discretizes the irregular area into a set of points in a specific plane rectangular coordinate system and calculates the total profit of a solution of SMA. Then, candidate strip generation produces numerous candidate strips based on the grid space of the first phase. Finally, strip selection develops the optimal schedule for all the SAR satellites based on the result of candidate strip generation. In addition, this paper proposes a normalized grid space construction algorithm, a candidate strip generation algorithm, and a tabu search algorithm with variable neighborhoods for the three successive phases, respectively. To verify the effectiveness of the proposed method, we performed simulation experiments on several scenarios and compared our method with seven other methods; compared to the best of them, our method improves profit by 6.38% using the same resources.
Figures:
Figure 1. Schematic illustration of SAR satellite imaging: (a) strips with different look angles; (b) start time and end time of a strip.
Figure 2. The piecewise linear function for profit calculation.
Figure 3. Main GCS framework for SMA.
Figure 4. Schematic diagram of the grid space construction.
Figure 5. Candidate strips based on the grid: (a) ascending; (b) descending.
Figure 6. Two modes of candidate strip generation: (a) parallel split; (b) the grid split proposed in this paper.
Figure 7. 0–1 integer vector coding method.
Figure 8. Flow diagram of the VNTS proposed for strip selection.
Figure 9. Profit of the obtained schedule versus CPU run time: (a) Belarus, GCS strategy; (b) Belarus, GTS strategy; (c) Gabon, GCS strategy; (d) Gabon, GTS strategy.
Figure 10. Profit of the final schedule versus the number of imaging opportunities: (a) Belarus, GCS strategy; (b) Belarus, GTS strategy; (c) Gabon, GCS strategy; (d) Gabon, GTS strategy.
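The strip-selection phase encodes a schedule as a 0–1 vector (Figure 7) and searches it with tabu search. Below is a generic tabu-search skeleton over such a vector in NumPy; the profit function (covered cells minus an overlap penalty), the tabu tenure, and the coverage data are stand-ins, not the paper's profit model or its variable neighborhoods.

```python
# Tabu search over a 0-1 strip-selection vector: flip one strip per
# iteration, forbidding recently flipped strips for a fixed tenure.
import numpy as np

rng = np.random.default_rng(0)
n_strips, n_cells = 30, 400
covers = rng.random((n_strips, n_cells)) < 0.08     # strip-to-grid coverage map

def profit(x: np.ndarray) -> float:
    """Stand-in objective: covered cells minus a penalty for overlap."""
    hits = covers[x.astype(bool)].sum(axis=0)
    return float((hits > 0).sum() - 0.5 * np.maximum(hits - 1, 0).sum())

def flipped(x: np.ndarray, j: int) -> np.ndarray:
    y = x.copy()
    y[j] ^= 1                                       # toggle strip j
    return y

x = np.zeros(n_strips, dtype=int)                   # empty schedule
best, best_val = x.copy(), profit(x)
tabu: dict[int, int] = {}                           # strip index -> release iter

for it in range(300):
    moves = [(profit(flipped(x, j)), j)
             for j in range(n_strips) if tabu.get(j, 0) <= it]
    val, j = max(moves)                             # best non-tabu neighbour
    x = flipped(x, j)
    tabu[j] = it + 7                                # tabu tenure of 7 iterations
    if val > best_val:
        best, best_val = x.copy(), val

print(f"profit {best_val:.1f} with {int(best.sum())} strips selected")
```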
9 pages, 2637 KiB  
Article
Direct Ink-Write Printing of Ceramic Clay with an Embedded Wireless Temperature and Relative Humidity Sensor
by Cory Marquez, Jesus J. Mata, Anabel Renteria, Diego Gonzalez, Sofia Gabriela Gomez, Alexis Lopez, Annette N. Baca, Alan Nuñez, Md Sahid Hassan, Vincent Burke, Dina Perlasca, Yifeng Wang, Yongliang Xiong, Jessica N. Kruichak, David Espalin and Yirong Lin
Sensors 2023, 23(6), 3352; https://doi.org/10.3390/s23063352 - 22 Mar 2023
Cited by 10 | Viewed by 2257
Abstract
This research presents a simple method to additively manufacture Cone 5 porcelain clay ceramics using the direct ink-write (DIW) printing technique. DIW allows the extrusion of highly viscous ceramic materials with relatively high quality and good mechanical properties, along with freedom of design and the capability to manufacture complex geometrical shapes. Clay particles were mixed with deionized (DI) water at different ratios, with the most suitable composition for 3D printing observed at a 1:5 w/c ratio (16.2 wt.% DI water). Different geometrical designs were printed to demonstrate the printing capabilities of the paste. In addition, a clay structure was fabricated with a wireless temperature and relative humidity (RH) sensor embedded during the 3D printing process. The embedded sensor read up to 65% RH and temperatures of up to 85 °F from a maximum distance of 141.7 m. The structural integrity of the selected 3D-printed geometries was confirmed through the compressive strength of fired and non-fired clay samples, with strengths of 70 MPa and 90 MPa, respectively. This research demonstrates the feasibility of DIW printing of porcelain clay with embedded sensors, with fully functional temperature- and humidity-sensing capabilities.
(This article belongs to the Special Issue Emerging Functional Materials for Sensor Applications)
Figures:
Figure 1. Cone 5 porcelain print through DIW: (a) non-fired and (b) fired samples.
Figure 2. Viscosity as a function of shear rate for porcelain clay.
Figure 3. (a) Initial and (b) redesigned model design.
Figure 4. (A) Schematic illustration of DIW printing with an embedded sensor and (B) images of the embedded-sensor printing stages.
Figure 5. XRD of non-fired and fired porcelain clay samples.
Figure 6. Stress vs. strain compression analysis of non-fired clay vs. fired clay.
Figure 7. Relative humidity and temperature readings over time from the clay-embedded sensor, measured at 141.7 m.
13 pages, 40819 KiB  
Article
Elastic Textile Wristband for Bioimpedance Measurements
by Giuseppina Monti, Federica Raheli, Andrea Recupero and Luciano Tarricone
Sensors 2023, 23(6), 3351; https://doi.org/10.3390/s23063351 - 22 Mar 2023
Cited by 1 | Viewed by 1768
Abstract
In this paper, wristband electrodes for hand-to-hand bioimpedance measurements are investigated. The proposed electrodes consist of a stretchable conductive knitted fabric. Different implementations have been developed and compared with Ag/AgCl commercial electrodes. Hand-to-hand measurements at 50 kHz on forty healthy subjects have been carried out, and the Passing–Bablok regression method has been exploited to compare the proposed textile electrodes with commercial ones. It is demonstrated that the proposed designs guarantee reliable measurements and easy, comfortable use, thus representing an excellent solution for the development of a wearable bioimpedance measurement system.
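Passing–Bablok regression, used here to compare the textile and commercial electrodes, fits a line whose slope is the shifted median of all pairwise slopes, making it robust to outliers and to measurement error in both methods. The following minimal Python sketch illustrates the idea; the implementation details and the |Z| readings are illustrative, not the study's data or code:

```python
import numpy as np

def passing_bablok(x, y):
    """Passing-Bablok regression (minimal sketch).

    Slope = shifted median of all pairwise slopes (slopes of exactly -1 are
    discarded; the median index is shifted by the count of slopes < -1).
    Intercept = median of y - slope * x. Assumes positively correlated methods.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            dx, dy = x[j] - x[i], y[j] - y[i]
            if dx == 0:
                continue          # ties in x are skipped in this sketch
            s = dy / dx
            if s != -1:           # slopes of exactly -1 are ignored by definition
                slopes.append(s)
    m = len(slopes)
    k = int(np.sum(np.array(slopes) < -1))   # offset caused by negative slopes
    slopes = np.sort(slopes)
    if m % 2:                                # shifted median of pairwise slopes
        b = slopes[(m - 1) // 2 + k]
    else:
        b = 0.5 * (slopes[m // 2 - 1 + k] + slopes[m // 2 + k])
    a = float(np.median(y - b * x))
    return b, a

# Hypothetical |Z| readings (ohms) for the two electrode types, for illustration only
commercial = np.array([480.0, 510.0, 495.0, 530.0, 502.0, 515.0])
textile = np.array([478.0, 515.0, 490.0, 528.0, 505.0, 512.0])
slope, intercept = passing_bablok(commercial, textile)
print(slope, intercept)   # slope near 1 and intercept near 0 indicate agreement
```

A slope close to 1 and an intercept close to 0, with confidence intervals covering those values, is the usual criterion for concluding that the two measurement methods agree.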
Figure 1: Setup adopted for the measurements with commercial (left) and textile electrodes (middle and right).
Figure 2: Schematic representation of the textile electrodes analyzed in this paper. All the electrodes consisted of a bracelet fabricated by combining textile elastic materials and the conductive fabric (Shielded Technik-tex P180 + B by Statex). The textiles referred to as "Textile Elastic Band 1" and "Textile Elastic Band 2" differed in their thicknesses and stretchability as follows: Textile Elastic Band 1 was thicker and more stretchable than Textile Elastic Band 2.
Figure 3: Photographs of the fabricated textile electrodes.
Figure 4: Comparison of the values obtained for |Z| and PA by using the Ag/AgCl commercial electrodes and the Textile C electrodes.
Figure 5: Passing–Bablok regression analysis results obtained for |Z|. The thin dotted line is the line of identity (y = x), while the thick blue line is the line of best fit. The red dashed lines indicate the 95% confidence intervals.
Figure 6: Passing–Bablok regression analysis results obtained for PA. The thin dotted line is the line of identity (y = x), while the thick blue line is the line of best fit. The red dashed lines indicate the 95% confidence intervals.
Figure 7: Results of the ten measurements performed on PUT 1.
Figure 8: Investigation of the performance of the textile electrodes after washing.
22 pages, 704 KiB  
Review
Wearable and Portable Devices for Acquisition of Cardiac Signals while Practicing Sport: A Scoping Review
by Sofia Romagnoli, Francesca Ripanti, Micaela Morettini, Laura Burattini and Agnese Sbrollini
Sensors 2023, 23(6), 3350; https://doi.org/10.3390/s23063350 - 22 Mar 2023
Cited by 16 | Viewed by 4258
Abstract
Wearable and portable devices capable of acquiring cardiac signals are at the frontier of the sport industry. They are becoming increasingly popular for monitoring physiological parameters while practicing sport, given the advances in miniaturized technologies and powerful data- and signal-processing applications. Data and signals acquired by these devices are increasingly used to monitor athletes' performances and to define risk indices for sport-related cardiac diseases, such as sudden cardiac death. This scoping review investigated commercial wearable and portable devices employed for cardiac signal monitoring during sport activity. A systematic search of the literature was conducted on PubMed, Scopus, and Web of Science. After study selection, a total of 35 studies were included in the review. The studies were categorized based on the application of wearable or portable devices in (1) validation studies, (2) clinical studies, and (3) development studies. The analysis revealed that standardized protocols for validating these technologies are necessary: results obtained from the validation studies turned out to be heterogeneous and scarcely comparable, since the reported metrological characteristics differed and the validation of several devices was carried out during different sport activities. Finally, results from clinical studies highlighted that wearable devices are crucial to improve athletes' performance and to prevent adverse cardiovascular events.
Figure 1: Flowchart of the systematic literature search, study selection, and classification.
16 pages, 99728 KiB  
Article
A Cost-Effective Lightning Current Measuring Instrument with Wide Current Range Detection Using Dual Signal Conditioning Circuits
by Youngjun Lee and Young Sam Lee
Sensors 2023, 23(6), 3349; https://doi.org/10.3390/s23063349 - 22 Mar 2023
Viewed by 2235
Abstract
Lightning strikes can cause significant damage to critical infrastructure and pose a serious threat to public safety. To ensure the safety of facilities and investigate the causes of lightning accidents, we propose a cost-effective design method for a lightning current measuring instrument that uses a Rogowski coil and dual signal conditioning circuits to detect lightning currents ranging from hundreds of amperes to hundreds of kiloamperes. To implement the proposed instrument, we design signal conditioning circuits and software capable of detecting and analyzing lightning currents from ±500 A to ±100 kA. By employing dual signal conditioning circuits, it offers the advantage of detecting a wide range of lightning currents compared to existing lightning current measuring instruments. The proposed instrument has the following features. First, the peak current, polarity, T1 (front time), T2 (time to half value), and Q (amount of energy of the lightning current) can be analyzed and measured with a fast sampling time of 380 ns. Second, it can distinguish whether a lightning current is induced or direct. Third, a built-in SD card is provided to save the detected lightning data. Finally, it provides Ethernet communication capability for remote monitoring. The performance of the proposed instrument is evaluated and validated by applying induced and direct lightning using a lightning current generator.
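Impulse parameters of this kind are conventionally derived from the sampled waveform: the front time T1 from the 10% and 90% crossings of the peak, T2 from the virtual origin to the 50% point on the tail, and a charge-like Q as the time integral of the current. The sketch below illustrates these common conventions in Python; it assumes the widely used 10-90% front-time definition and is not the instrument's actual firmware:

```python
import numpy as np

def impulse_parameters(i, dt):
    """Estimate impulse-current parameters from a sampled waveform (sketch).

    Uses the common convention T1 = 1.25 * (t90 - t10) and places the virtual
    origin where the straight line through the 10% and 90% points crosses zero
    (t10 - 0.1 * T1); the real instrument may use different definitions.
    """
    i = np.asarray(i, float)
    polarity = 1 if abs(i.max()) >= abs(i.min()) else -1
    w = i * polarity                           # positive-going working copy
    p = int(np.argmax(w))                      # index of the peak
    peak = w[p] * polarity
    t10 = np.argmax(w >= 0.1 * w[p]) * dt      # first crossing of 10% of peak
    t90 = np.argmax(w >= 0.9 * w[p]) * dt      # first crossing of 90% of peak
    T1 = 1.25 * (t90 - t10)                    # front time
    t0 = t10 - 0.1 * T1                        # virtual origin
    tail = p + np.argmax(w[p:] <= 0.5 * w[p])  # first 50% crossing on the tail
    T2 = tail * dt - t0                        # time to half value
    Q = float(np.sum(i) * dt)                  # time integral of the current
    return peak, polarity, T1, T2, Q

# e.g., with the instrument's stated 380 ns sampling period:
# peak, pol, T1, T2, Q = impulse_parameters(samples, dt=380e-9)
```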
Figure 1: (a) Number of lightning strikes by year in South Korea from 2012 to 2021. (b) Number of lightning strikes by month in South Korea in 2021.
Figure 2: Lightning accident at the Seohae Bridge.
Figure 3: Shape of the proposed LCMI.
Figure 4: Block diagram of the proposed LCMI.
Figure 5: PCBs of the proposed LCMI.
Figure 6: (a) Direct lightning current waveform; (b) induced lightning current waveform.
Figure 7: Shape of the Rogowski coil current sensor.
Figure 8: Dual signal conditioning circuits capable of processing lightning currents from ±500 A to ±100 kA.
Figure 9: Signal processing diagram of the ADC depending on the magnitude of the lightning current.
Figure 10: Experimental test setup for a signal conditioning circuit.
Figure 11: An example of the experimental results.
Figure 12: Signal processing software algorithm of the microcontroller for the lightning current.
Figure 13: Software flowchart of the microcontroller ATMEGA1280.
Figure 14: (a) First screen of the LCD. (b) Second screen of the LCD. (c) Third screen of the LCD. (d) Fourth screen of the LCD.
Figure 15: Screen capture of a remote monitoring program.
Figure 16: Place where the proposed LCMI is installed on the cable bridge.
16 pages, 13582 KiB  
Article
Non-Destructive Inspection of High Temperature Piping Combining Ultrasound and Eddy Current Testing
by David Santos, Miguel A. Machado, João Monteiro, José P. Sousa, Carla S. Proença, Fernando S. Crivellaro, Luís S. Rosado and Telmo G. Santos
Sensors 2023, 23(6), 3348; https://doi.org/10.3390/s23063348 - 22 Mar 2023
Cited by 15 | Viewed by 3784
Abstract
This paper presents an automated Non-Destructive Testing (NDT) system for the in-service inspection of orbital welds on tubular components operating at temperatures as high as 200 °C. The combination of two different NDT methods and their respective inspection systems is proposed to cover the detection of all potentially defective weld conditions. The proposed NDT system combines ultrasonic and eddy current techniques with dedicated approaches for dealing with high-temperature conditions. Phased array ultrasound was employed to search for volumetric defects within the weld bead volume, while eddy currents were used to look for surface and sub-surface cracks. The phased array ultrasound results showed the effectiveness of the cooling mechanisms and that temperature effects on sound attenuation can be easily compensated for up to 200 °C. The eddy current results showed almost no influence of temperature up to 300 °C.
(This article belongs to the Special Issue Advanced Sensing and Evaluating Technology in Nondestructive Testing)
Figure 1: Automated inspection system elements.
Figure 2: ECT system validation: (a) automated inspection system coupled with ECT probes while performing a scan along the pipe perimeter; (b) thermographic image with temperature represented in degrees Celsius (°C).
Figure 3: (a) Heating system assembly and (b) electrical wires for the resistances.
Figure 4: Beam variation with temperature.
Figure 5: US probe and wedge assembly.
Figure 6: Scan plan. (a) Probe position; (b) probe movement.
Figure 7: A-scans obtained for: (a) weld root notch at 18 °C, (b) weld root notch at 122 °C, (c) weld root notch at 200 °C, (d) weld toe notch at 18 °C, (e) weld toe notch at 120 °C, (f) weld toe notch at 203 °C, (g) lack of fusion at 18 °C, (h) lack of fusion at 120 °C, and (i) lack of fusion at 201 °C.
Figure 8: Variation in attenuation with temperature, for different defects.
Figure 9: Variation of the maximum amplitude with temperature, for different defects.
Figure 10: Variation in the length of defects with temperature, for (a) weld root notch, (b) weld toe notch and (c) lack of fusion.
Figure 11: PCB probe developed for the inspection of the pipe base material. Linear excitation element centered within the two differential coils.
Figure 12: Construction process and chassis parts for the probe developed for the inspection of the pipe base material at high temperature. (a) PCB probe and lid assembled; (b) probe CAD cross section; (c) PCB probe and lid (view from bottom); (d) assembled probe.
Figure 13: Construction process and chassis parts for the probe developed for the inspection of the pipe weld bead at high temperatures. (a) Probe assembly cut-out in CAD; (b) the lid with the sealing O-ring and connectors for cooling and signaling; (c) the FDM-printed support used to hold the orthogonal coils; (d) the final external aspect of the probe.
Figure 14: Output signal of the base material probe when scanning through the artificially made defect with different orientations.
Figure 15: C-scan output signal operating the PCB probe for pipe base material at 1 MHz at 240 °C.
Figure 16: ECT probe developed for the weld bead inspection performing a scan on the bead's right side.
Figure 17: Output signal of the weld bead probe when scanning along the weld bead interface and centre.
12 pages, 704 KiB  
Article
Wearable Activity Trackers Objectively Measure Incidental Physical Activity in Older Adults Undergoing Aortic Valve Replacement
by Nicola Straiton, Matthew Hollings, Janice Gullick and Robyn Gallagher
Sensors 2023, 23(6), 3347; https://doi.org/10.3390/s23063347 - 22 Mar 2023
Cited by 1 | Viewed by 2116
Abstract
Background: For older adults with severe aortic stenosis (AS) undergoing aortic valve replacement (AVR), recovery of physical function is important, yet few studies measure it objectively in real-world environments. This exploratory study assessed the acceptability and feasibility of using wearable trackers to measure incidental physical activity (PA) in AS patients before and after AVR. Methods: Fifteen adults with severe AS wore an activity tracker at baseline, and ten at one-month follow-up. Functional capacity (six-minute walk test, 6MWT) and HRQoL (SF-12) were also assessed. Results: At baseline, AS participants (n = 15, 53.3% female, mean age 82.3 ± 7.0 years) wore the tracker for four consecutive days for more than 85% of the total prescribed time, and this improved at follow-up. Before AVR, participants demonstrated a wide range of incidental PA (median step count 3437 per day) and functional capacity (median 6MWT 272 m). Post-AVR, participants with the lowest incidental PA, functional capacity, and HRQoL at baseline had the greatest improvements within each measure; however, improvements in one measure did not translate to improvements in another. Conclusion: The majority of older AS participants wore the activity trackers for the required time period before and after AVR, and the data attained were useful for understanding AS patients' physical function.
(This article belongs to the Special Issue Wearable and Unobtrusive Technologies for Healthcare Monitoring)
Figure 1: PRISMA diagram.
Figure 2: Correlation matrix: association between functional capacity assessments, incidental PA outcomes, and key clinical and demographic variables pre- and post-AVR. 6MWT = six-minute walk test, Steps = average steps per day, MVPA = mean vigorous physical activity, Sed hrs = sedentary hours, Light PA = light physical activity, Mod PA = moderate physical activity, Vig PA = vigorous physical activity, and 5MWT = gait speed.
11 pages, 2723 KiB  
Article
Binding of SARS-CoV-2 Structural Proteins to Hemoglobin and Myoglobin Studied by SPR and DR LPG
by Georgi Dyankov, Petia Genova-Kalou, Tinko Eftimov, Sanaz Shoar Ghaffari, Vihar Mankov, Hristo Kisov, Petar Veselinov, Evdokia Hikova and Nikola Malinowski
Sensors 2023, 23(6), 3346; https://doi.org/10.3390/s23063346 - 22 Mar 2023
Cited by 4 | Viewed by 3425
Abstract
One of the first clinical observations related to COVID-19 identified hematological dysfunctions. These were explained by theoretical modeling, which predicted that motifs from SARS-CoV-2 structural proteins could bind to porphyrin. At present, there is very little experimental data that could provide reliable information about possible interactions. The surface plasmon resonance (SPR) method and double resonance long period grating (DR LPG) were used to identify the binding of the S/N proteins and the receptor binding domain (RBD) to hemoglobin (Hb) and myoglobin (Mb). SPR transducers were functionalized with Hb and Mb, while LPG transducers were functionalized only with Hb. Ligands were deposited by the matrix-assisted pulsed laser evaporation (MAPLE) method, which guarantees maximum interaction specificity. The experiments carried out showed S/N protein binding to Hb and Mb, and RBD binding to Hb. They also demonstrated that chemically inactivated virus-like particles (VLPs) interact with Hb. The binding activity of the S/N and RBD proteins was assessed. It was found that protein binding fully inhibited heme functionality. The registered binding of the N protein to Hb/Mb is the first experimental fact supporting the theoretical predictions; it suggests an additional function of this protein beyond RNA binding. The lower RBD binding activity reveals that other functional groups of the S protein participate in the interaction. The high-affinity binding of these proteins to Hb provides an excellent opportunity for assessing the effectiveness of inhibitors targeting the S/N proteins.
Figure 1: SPR chip: gilded diffraction grating with immobilized Hb/Mb.
Figure 2: LLPG around turning point: (a) dispersion dependence and TAPs of photosensitive fiber PS1250/1500 (Fibercore) for different cladding modes; (b) TAP LPG splitting and transformation into a double resonance grating for the 11th cladding mode.
Figure 3: A spectral shift as a result of S/N proteins binding to: (a) Hb; (b) Mb.
Figure 4: The spectral shift resulting from Hb/RBD interaction as a function of concentration.
Figure 5: SPR response as a result of the Hb–VLP interaction at different concentrations.
Figure 6: Refractive index change as a result of Hb binding to: (a) N and S proteins, (b) RBD proteins.
15 pages, 3547 KiB  
Article
Time-Series Representation Learning in Topology Prediction for Passive Optical Network of Telecom Operators
by Haoran Zhao, Yuchen Fang, Yuxiang Zhao, Zheng Tian, Weinan Zhang, Xidong Feng, Li Yu, Wei Li, Hulei Fan and Tiema Mu
Sensors 2023, 23(6), 3345; https://doi.org/10.3390/s23063345 - 22 Mar 2023
Viewed by 1550
Abstract
The passive optical network (PON) is widely used in optical fiber communication thanks to its low cost and low resource consumption. However, this passiveness brings a critical problem: identifying the topology structure requires manual work, which is costly and prone to introducing noise into the topology logs. In this paper, we first provide a base solution that introduces neural networks for this problem, and on top of it we propose a complete methodology (PT-Predictor) for predicting PON topology through representation learning on its optical power data. Specifically, we design useful model ensembles (GCE-Scorer) to extract the features of optical power, with noise-tolerant training techniques integrated. We further implement a data-based aggregation algorithm (MaxMeanVoter) and a novel Transformer-based voter (TransVoter) to predict the topology. Compared with previous model-free methods, PT-Predictor improves prediction accuracy by 23.1% in scenarios where the data provided by telecom operators are sufficient, and by 14.8% in scenarios where data are temporarily insufficient. In addition, we identify a class of scenarios where the PON topology does not follow a strict tree structure, so topology prediction cannot be performed effectively by relying on optical power data alone; this will be studied in our future work.
(This article belongs to the Section Optical Sensors)
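The max-then-mean aggregation behind a voter of this kind can be sketched directly from the descriptions in the figure captions below: because perturbations are occasional, the best-matching sequence pair is taken as the score of an ONU pair (max), and the support for each candidate secondary splitter is the mean over its member ONUs. The following Python sketch is a minimal illustration under assumed data structures; it is not the paper's MaxMeanVoter code:

```python
import numpy as np

def max_mean_vote(pair_scores, onu_labels):
    """Assign a target ONU to a secondary splitter by max-then-mean voting.

    pair_scores: dict mapping each reference ONU id to the similarity scores
        of its sampled sequence pairs with the target ONU (illustrative layout).
    onu_labels: dict mapping each reference ONU id to its (noisy) splitter label.
    """
    # Max over sequence pairs: the strongest co-perturbation represents the pair
    onu_score = {onu: float(np.max(s)) for onu, s in pair_scores.items()}
    # Mean over ONUs sharing a candidate splitter label
    splitter_scores = {}
    for onu, score in onu_score.items():
        splitter_scores.setdefault(onu_labels[onu], []).append(score)
    splitter_mean = {sp: float(np.mean(v)) for sp, v in splitter_scores.items()}
    return max(splitter_mean, key=splitter_mean.get)

# Hypothetical usage: a target ONU scored against three reference ONUs
scores = {"onu_a": [0.2, 0.9, 0.4], "onu_b": [0.3, 0.5], "onu_c": [0.8, 0.7]}
labels = {"onu_a": "splitter_1", "onu_b": "splitter_1", "onu_c": "splitter_2"}
print(max_mean_vote(scores, labels))  # -> "splitter_2" (0.8 beats mean(0.9, 0.5) = 0.7)
```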
Figure 1: Topology of PON. OLTs are in factories of telecom operators while ONUs are in users' rooms. The secondary splitter one ONU belongs to is registered, but not guaranteed due to passiveness and poor management. The higher level is confirmed.
Figure 2: Waveforms of ONU's optical power. The vertical axis represents the optical power. The horizontal axis represents the sampling point, which is sampled every 10 min. Each subplot shows the situation of one secondary splitter, and curves of different colors reflect the waveforms of different ONUs under the same secondary splitter. Similarity of two waveforms indicates that the two ONUs are suffering the same perturbation. (a) Ideal perturbation. (b) Non-ideal perturbation.
Figure 3: Structure of the base solution. Optical power and topology data are first combined and preprocessed to get sequences. Then the scoring model gives the scores for the similarity of each sampled sequence pair. Using the scores, the voting algorithm determines the true label (secondary splitter) of the target ONU.
Figure 4: Preprocessing and feature engineering.
Figure 5: Composition of GCE-Scorer. (a) CNN-based scoring model. (b) GRU-based scoring model.
Figure 6: Max voting from sequences to ONUs. Perturbation occurs randomly and occasionally, so we choose the sequence pair with the most similarity as the score of the ONU pair.
Figure 7: Average voting from ONUs to secondary splitters. The noisy label is provided by RMS. The true label is temporarily unknown to us, but it exists objectively and is to be found.
Figure 8: Principle of TransVoter. All scores of sequence pairs concerning one ONU are aggregated and processed to predict its label.
Figure 9: Ground truth topology of OLT 0 and OLT 1. (a) OLT 0. (b) OLT 1.
Figure 10: Accuracy through different methods. (a) Normalization levels. (b) Scoring methods. (c) Data-based voters. (d) Voting methods.
Figure 11: Waveforms of OLT 1. The vertical axis represents the optical power. The horizontal axis represents the sampling point, which is sampled every 10 min. Each subplot shows the situation of one secondary splitter, and curves of different colors reflect the waveforms of different ONUs under the same secondary splitter. Similarity of two waveforms indicates that the two ONUs are suffering the same perturbation. (a) Secondary Splitter 1. (b) Secondary Splitter 2.
Figure A1: Relationship of filtering threshold and remaining ONUs. The horizontal axis represents the threshold. Sequences with null rates higher than the threshold will be filtered out. The vertical axis represents the number of remaining ONUs under the given threshold. (a) Trainset. (b) Testset.
19 pages, 7975 KiB  
Article
Trusted Autonomous Operations of Distributed Satellite Systems Using Optical Sensors
by Kathiravan Thangavel, Dario Spiller, Roberto Sabatini, Stefania Amici, Nicolas Longepe, Pablo Servidia, Pier Marzocca, Haytham Fayek and Luigi Ansalone
Sensors 2023, 23(6), 3344; https://doi.org/10.3390/s23063344 - 22 Mar 2023
Cited by 13 | Viewed by 3233
Abstract
Recent developments in Distributed Satellite Systems (DSS) have undoubtedly increased mission value due to the ability to reconfigure the spacecraft cluster/formation and to incrementally add new satellites or update older ones in the formation. These features provide inherent benefits, such as increased mission effectiveness, multi-mission capabilities, and design flexibility. Trusted Autonomous Satellite Operations (TASO) are possible owing to the predictive and reactive integrity features offered by Artificial Intelligence (AI), both on board the satellites and in the ground control segments. To effectively monitor and manage time-critical events such as disaster relief missions, the DSS must be able to reconfigure autonomously. To achieve TASO, the DSS should have reconfiguration capability within its architecture, and the spacecraft should communicate with each other through an Inter-Satellite Link (ISL). Recent advances in AI, sensing, and computing technologies have resulted in promising new concepts for the safe and efficient operation of the DSS. The combination of these technologies enables trusted autonomy in intelligent DSS (iDSS) operations, allowing for a more responsive and resilient approach to Space Mission Management (SMM) in terms of data collection and processing, especially when using state-of-the-art optical sensors. This research looks into the potential applications of iDSS by proposing a constellation of satellites in Low Earth Orbit (LEO) for near-real-time wildfire management. For spacecraft to continuously monitor Areas of Interest (AOI) in a dynamically changing environment, satellite missions must have the extensive coverage, adequate revisit intervals, and reconfiguration capability that iDSS can offer. Our recent work demonstrated the feasibility of AI-based data processing using state-of-the-art on-board astrionics hardware accelerators. Based on these initial results, AI-based software has subsequently been developed for wildfire detection on board iDSS satellites. To demonstrate the applicability of the proposed iDSS architecture, simulation case studies are performed considering different geographic locations.
Figure 1: Classification of satellite systems.
Figure 2: (a) Current state-of-the-art DSS operations; (b) DSS operation with ISL, i.e., iDSS. Adapted from [3].
Figure 3: ISL classification [31].
Figure 4: Link distance against data rates for optical and RF ISL systems. Adapted from [36].
Figure 5: (a) ISL relationship between the orbits and the ground station; (b) proposed iDSS constellation with ISL.
Figure 6: An instance of the proposed EO constellation illustration with inter-orbital plane ISL and ground link.
Figure 7: Wildfire segmentation map of the hyperspectral imagery over Australia [56].
Figure 8: An illustrative view of iDSS reconfiguration.
Figure 9: Satellite Field of View: (a) nadir pointing; (b) reconfiguration at the entry; (c) reconfiguration at the exit [35].
Figure 10: Australia: (a) system-wide access status; (b) system-wide access status with reconfiguration.
Figure 11: Africa: (a) system-wide access status; (b) system-wide access status with reconfiguration.
Figure 12: Italy: (a) system-wide access status; (b) system-wide access status with reconfiguration.
Figure 13: USA: (a) system-wide access status; (b) system-wide access status with reconfiguration.
Figure 14: Australia satellite access duration with reconfiguration and its orbit.
19 pages, 22296 KiB  
Article
Detection of Power Line Insulators in Digital Images Based on the Transformed Colour Intensity Profiles
by Michał Tomaszewski, Rafał Gasz and Jakub Osuchowski
Sensors 2023, 23(6), 3343; https://doi.org/10.3390/s23063343 - 22 Mar 2023
Cited by 4 | Viewed by 2230
Abstract
Proper maintenance of the electricity infrastructure requires periodic condition inspections of power line insulators, which can be subjected to various damages such as burns or fractures. The article includes an introduction to the problem of insulator detection and a description of various currently used methods. The authors then propose a new method for detecting power line insulators in digital images by applying selected signal analysis and machine learning algorithms; the insulators detected in the images can then be assessed in depth. The data set used in the study consists of images acquired by an Unmanned Aerial Vehicle (UAV) during its overflight along a high-voltage line located on the outskirts of the city of Opole, Opolskie Voivodeship, Poland. In the digital images, the insulators appear against different backgrounds, for example, sky, clouds, tree branches, elements of power infrastructure (wires, trusses), farmland, and bushes. The proposed method is based on the classification of colour intensity profiles in digital images. First, a set of points located on the digital images of power line insulators is determined. These points are then connected by lines along which colour intensity profiles are extracted. The profiles are transformed using the Periodogram or Welch method and then classified with Decision Tree, Random Forest, or XGBoost algorithms. The article describes the computational experiments, the obtained results, and possible directions for further research. In the best case, the proposed solution achieved satisfactory efficiency (F1 score = 0.99). The promising classification results indicate the possibility of practical application of the presented method.
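The core of such a feature pipeline, mapping colour intensity profiles of varying length to equal-length spectral vectors, can be sketched with scipy: with a fixed nfft, the Welch method returns nfft/2 + 1 power values regardless of profile length, and those vectors can feed a Random Forest. A minimal sketch follows (the paper also uses the periodogram as an alternative transform); the profiles and labels below are random placeholders, not the study's data:

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def profile_features(profile, nfft=64):
    """Map a variable-length colour intensity profile to a fixed-length
    spectral feature vector of size nfft // 2 + 1 via the Welch method."""
    profile = np.asarray(profile, float)
    # nperseg is capped by the profile length; nfft fixes the output size
    _, pxx = welch(profile, nperseg=min(len(profile), nfft), nfft=nfft)
    return pxx

# Hypothetical training data: profiles crossing insulators vs. background
profiles = [np.random.rand(n) for n in (120, 300, 80, 200)]   # placeholder profiles
labels = [1, 0, 1, 0]                                         # 1 = insulator class

X = np.vstack([profile_features(p) for p in profiles])        # equal-length rows
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
```

The fixed output length is exactly what makes profiles taken between point pairs at arbitrary distances comparable by a single classifier, as the paper's Figure 2 illustrates.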
Figure 1: Consecutive steps of the proposed method.
Figure 2: Transformation of the colour intensity profiles determined between individual points (vectors with different lengths depending on the distance between pairs of points) into feature vectors of equal length.
Figure 3: Example images of power line insulators from the prepared data set.
Figure 4: The value of the F1 score parameter depending on the nfft value for various transformation methods and classifiers as well as three channels analysed separately (Red (A), Green (B), Blue (C)).
Figure 5: The value of the F1 score parameter depending on the nfft value for various transformation methods and classifiers; results obtained for three connected channels (R + G + B).
Figure 6: Transformation time (A) and classification time (B) depending on nfft values for various transformation methods and classifiers.
Figure 7: The learning time of classifiers depending on the nfft values for various transformation methods and classifiers.
15 pages, 5288 KiB  
Article
Stiffness Considerations for a MEMS-Based Weighing Cell
by Karin Wedrich, Valeriya Cherkasova, Vivien Platl, Thomas Fröhlich and Steffen Strehle
Sensors 2023, 23(6), 3342; https://doi.org/10.3390/s23063342 - 22 Mar 2023
Cited by 3 | Viewed by 1914
Abstract
In this paper, a miniaturized weighing cell based on a micro-electro-mechanical system (MEMS) is discussed. The MEMS-based weighing cell is inspired by macroscopic electromagnetic force compensation (EMFC) weighing cells, and one of the crucial system parameters, the stiffness, is analyzed. The system stiffness in the direction of motion is first evaluated analytically using a rigid body approach and then modeled numerically using the finite element method for comparison purposes. First prototypes of MEMS-based weighing cells were successfully microfabricated, and the fabrication-induced system characteristics were considered in the overall system evaluation. The stiffness of the MEMS-based weighing cells was determined experimentally using a static approach based on force-displacement measurements. Considering the geometry parameters of the microfabricated weighing cells, the measured stiffness values fit the calculated values with a deviation of −6.7% to 3.8%, depending on the microsystem under test. Based on our results, we demonstrate that MEMS-based weighing cells can be successfully fabricated with the proposed process and, in principle, be used for high-precision force measurements in the future. Nevertheless, improved system designs and read-out strategies are still required.
(This article belongs to the Topic MEMS Sensors and Resonators)
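A static stiffness measurement of this kind reduces to the slope of the measured force-displacement curve. As a minimal illustration of that reduction (with hypothetical numbers, not the measured MEMS data), a least-squares line fit recovers the stiffness:

```python
import numpy as np

def stiffness_from_fd(displacement_m, force_N):
    """Estimate static stiffness (N/m) as the least-squares slope of the
    force-displacement curve; the offset f0 captures any residual force."""
    k, f0 = np.polyfit(np.asarray(displacement_m), np.asarray(force_N), 1)
    return k, f0

# Hypothetical data: ~2 um of travel for a device of roughly 12.5 N/m
x = np.linspace(0.0, 2e-6, 10)                 # displacement in metres
f = 12.5 * x + 1e-9 * np.random.randn(10)      # force in newtons, with noise
k, f0 = stiffness_from_fd(x, f)
print(f"stiffness: {k:.2f} N/m")               # slope close to 12.5 N/m
```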
Figure 1: (a) Schematic illustration of the working principle of the MEMS-based weighing balance including the mechanical parts (black), the electrical parts and the electrical periphery (red). (1) upper lever, (2) lower lever, (3) shuttle, (4) coupling element, (5) transmission lever, V_s voltage of the sensor signal, V_A voltage to actuate the shuttle in balance; (b) photo-stacked overview image of the microfabricated MEMS-based weighing cell.
Figure 2: Schematic illustration of a single flexure hinge with indicated elastic deformation as used for the calculation, and geometric parameters: maximum height of the hinge H, minimal hinge size h, width (out-of-plane thickness) of the hinge w, length of the hinge l, bending angle φ, force F and momentum M.
Figure 3: Simplified schematic of the MEMS-based weighing cell used for the analytical approach. Black: unloaded state; dashed grey: loaded/deflected state.
Figure 4: (a) Illustration of sections of the meshed FEM model (software ANSYS); (b) weighing cell in an unloaded and a loaded state with the force F_y; (c) FEM simulation of an unloaded hinge G and the principal stress distribution if the hinge is loaded with a force F_y.
Figure 5: Schematic illustration of the fabrication workflow for a MEMS-based weighing cell based on a silicon-on-insulator substrate.
Figure 6: Illustration of the direct force measurement device used for measuring the stiffness of the MEMS-based weighing cell: (1) slit aperture; (2) beam balance; (3) loading button; (4) MEMS-based weighing cell chip; (5) joint; (6) permanent magnet; (7) coil; (8) deflection mirror; (9) interferometer.
Figure 7: Force-displacement measurement device used for the stiffness measurement of the MEMS-based weighing cell: (a) overall view with a magnification of the MEMS holder and the loading button; (b) side view from the camera used for positioning; (c) front view with the MEMS.
Figure 8: (a) Scanning electron microscopy (SEM) image showing a flexure hinge; (b) exemplary device layer sidewall image as recorded by laser-scanning microscopy (LSM); (c) the corresponding topography profile; (d) SEM image of the cross section of a cut flexure hinge with a designed height of h = 7 µm; (e) SEM image of the cross section with a designed height of h = 5 µm.
Figure 9: Dependence of the etching quality on the single flexure hinge stiffness, (a) in direction of motion and (b) in out-of-plane direction.
Figure 10: (a) Recorded compensation current signal from one cycle with 10 s integration time; (b) corresponding measured MEMS displacement signal of one calibration cycle with 10 s integration time; (c) force-displacement characteristics per cycle; (d) long-term measurement of MEMS stiffness of the system W 3-1 C1.
Figure 11: Eccentric test. Left: geometrical illustration of the probing area; right: stiffness measurement results in direction of motion of system W 3-1 C1.
Figure 12: Stiffness measured in the direction of motion compared to the calculated analytical values. Left: values of the stiffness results; right: percentage representation of the deviation from the measured stiffness (green) to the analytical stiffness with ideal geometry (dark blue) and to the calculations using the adjusted geometry (light blue).
15 pages, 4184 KiB  
Article
Fault Voiceprint Signal Diagnosis Method of Power Transformer Based on Mixup Data Enhancement
by Shuting Wan, Fan Dong, Xiong Zhang, Wenbo Wu and Jialu Li
Sensors 2023, 23(6), 3341; https://doi.org/10.3390/s23063341 - 22 Mar 2023
Cited by 5 | Viewed by 2176
Abstract
Voiceprint signals, as a non-contact test medium, have broad application prospects in power-transformer condition monitoring. Due to the high imbalance in the number of fault samples, a classifier trained on such data is prone to bias towards the fault categories with many samples, resulting in poor prediction performance for the other fault samples and degrading the generalization performance of the classification system. To solve this problem, a power-transformer fault voiceprint signal diagnosis method based on Mixup data enhancement and a convolutional neural network (CNN) is proposed. First, a parallel Mel filter is used to reduce the dimension of the fault voiceprint signal and obtain the Mel time spectrum. Then, the Mixup data enhancement algorithm is used to recombine the minority-class samples, effectively expanding the number of samples. Finally, a CNN is used to classify and identify the transformer fault types. The diagnosis accuracy of this method for typical unbalanced power-transformer faults can reach 99%, which is superior to other similar algorithms. The results show that this method can effectively improve the generalization ability of the model and has good classification performance.
(This article belongs to the Special Issue Advanced Sensing for Mechanical Vibration and Fault Diagnosis)
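Mixup itself is compact: a mixing weight lambda drawn from Beta(alpha, alpha) blends random pairs of training examples and their one-hot labels, x_mix = lambda * x_i + (1 - lambda) * x_j and y_mix = lambda * y_i + (1 - lambda) * y_j. A minimal sketch on Mel time spectra follows; the array shapes and alpha are illustrative (the Mel spectra themselves could be computed with, e.g., librosa.feature.melspectrogram):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mixup data enhancement: blend random pairs of examples and their
    one-hot labels with a Beta(alpha, alpha)-distributed mixing weight.

    x: batch of Mel time spectra, shape (N, n_mels, n_frames) -- illustrative
    y: one-hot labels, shape (N, n_classes)
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)             # mixing weight lambda
    perm = rng.permutation(len(x))           # random partner for each sample
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```

The mixed batch is then fed to the CNN with a soft-label loss (e.g., cross-entropy against the blended targets), which is what lets the augmentation expand the effective sample count for the rare fault categories.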
Figure 1: Voiceprint time spectrum drawing process.
Figure 2: Mel filter bank.
Figure 3: Mel time spectrum of a typical fault: (a) normal; (b) short-circuit impulse; (c) partial discharge; (d) DC bias.
Figure 4: Mixup enhancement diagram: (a) normal; (b) short-circuit impulse; (c) partial discharge; (d) DC bias.
Figure 5: Diagnostic flowchart.
Figure 6: Field test environment.
Figure 7: Diagnostic accuracy of different models. (a) Mel-Mixup; (b) Mel-basic data enhancement; (c) time-frequency-Mixup; (d) time-frequency-basic data enhancement.
Figure 8: Fault classification confusion matrix. (a) Mel-Mixup; (b) Mel-basic data enhancement; (c) time-frequency-Mixup; (d) time-frequency-basic data enhancement.
Figure 9: Comparison of diagnosis results of different models.
Figure 10: Loss curve (a) with data enhancement; (b) without data enhancement.
Figure 11: Classification accuracy performance of different methods.
19 pages, 5956 KiB  
Article
Bilateral Cross-Modal Fusion Network for Robot Grasp Detection
by Qiang Zhang and Xueying Sun
Sensors 2023, 23(6), 3340; https://doi.org/10.3390/s23063340 - 22 Mar 2023
Cited by 3 | Viewed by 2109
Abstract
In the field of vision-based robot grasping, effectively leveraging RGB and depth information to accurately determine the position and pose of a target is a critical issue. To address this challenge, we propose a tri-stream cross-modal fusion architecture for 2-DoF visual grasp detection. This architecture facilitates the interaction of bilateral RGB and depth information and is designed to efficiently aggregate multiscale information. Our novel modal interaction module (MIM) with a spatial-wise cross-attention algorithm adaptively captures cross-modal feature information, while the channel interaction modules (CIM) further enhance the aggregation of the different modal streams. In addition, we efficiently aggregate global multiscale information through a hierarchical structure with skip connections. To evaluate the performance of the proposed method, we conducted validation experiments on standard public datasets and real robot grasping experiments. We achieved image-wise detection accuracies of 99.4% and 96.7% on the Cornell and Jacquard datasets, respectively; object-wise detection accuracies reached 97.8% and 94.6% on the same datasets. Furthermore, physical experiments using the 6-DoF Elite robot demonstrated a success rate of 94.5%. These experiments highlight the superior accuracy of the proposed method.
(This article belongs to the Special Issue Multi-Modal Image Processing Methods, Systems, and Applications)
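The spatial-wise cross-attention idea behind a modal interaction module can be illustrated generically: one modality provides the queries and the other the keys and values, and attention is computed over spatial positions. The PyTorch sketch below shows this general pattern only; it is not the authors' MIM, and all layer choices and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class SpatialCrossAttention(nn.Module):
    """Generic cross-modal attention: RGB features query depth features over
    spatial positions (a sketch of the idea, not the paper's exact module)."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)        # queries from RGB
        self.kv = nn.Conv2d(channels, 2 * channels, 1)   # keys/values from depth
        self.scale = channels ** -0.5

    def forward(self, rgb, depth):                       # both: (B, C, H, W)
        B, C, H, W = rgb.shape
        q = self.q(rgb).flatten(2).transpose(1, 2)           # (B, HW, C)
        k, v = self.kv(depth).flatten(2).chunk(2, dim=1)     # two (B, C, HW)
        attn = torch.softmax(q @ k * self.scale, dim=-1)     # (B, HW, HW)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)     # (B, C, HW)
        return rgb + out.reshape(B, C, H, W)                 # residual fusion
```

Swapping the roles of the two inputs gives the symmetric depth-queries-RGB direction, which is the kind of bilateral interaction the architecture name suggests.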
Figure 1: Bilateral cross-modal fusion network architecture.
Figure 2: Input data feature extraction diagram using RSM.
Figure 3: The structure of the MIM module.
Figure 4: LMHSA block for RGB and depth feature extraction.
Figure 5: Lightweight multi-head self-attention block.
Figure 6: IRFFN block diagram.
Figure 7: LMHCA block for fused feature extraction.
Figure 8: Multi-head cross-attention block diagram.
Figure 9: Channel interaction module.
Figure 10: Physical experiment conditions. Experiment instruments include a Femto-W RGB-D camera, an EC-66 collaborative robot, a parallel gripper, and some objects to be grasped.
Figure 11: Experiment results of the algorithms proposed by Kumra et al. [8] in 2020, Wang et al. [6] in 2022, and our method on the Cornell dataset. The 1st and 2nd columns are the RGB and depth images. The 3rd column shows grasp detection results. The last three columns illustrate the quality, angle, and width heatmaps.
Figure 12: Experiment results of the algorithms proposed by Kumra et al. [8] in 2020, Wang et al. [6] in 2022, and our method on the Jacquard dataset.
Figure 13: Architectures in ablation experiments: (a) has no MIM blocks; (b) has no CIM blocks.
Figure 14: Four grasp stages in the physical experiment: (a) shows the initial position and posture of the EC-66 robot; in this stage, grasp detection is performed. After detection, the parallel gripper moves to the effective grasping position, shown in (b). The gripper then grasps the target, as shown in (c). Finally, the robot completes the target grasping task in (d).
Figure 15: Detection results of the physical experiment.
25 pages, 10766 KiB  
Review
Fluorescence Methods for the Detection of Bioaerosols in Their Civil and Military Applications
by Mirosław Kwaśny, Aneta Bombalska, Miron Kaliszewski, Maksymilian Włodarski and Krzysztof Kopczyński
Sensors 2023, 23(6), 3339; https://doi.org/10.3390/s23063339 - 22 Mar 2023
Cited by 6 | Viewed by 3098
Abstract
The article presents the history of the development and the current state of instrumentation for the detection of interferents and biological warfare simulants in the air with the laser-induced fluorescence (LIF) method. LIF is the most sensitive spectroscopic method and enables the measurement of single biological aerosol particles and of their concentration in the air. The overview covers both on-site measuring instruments and remote methods. The spectral characteristics of the biological agents, namely steady-state spectra, excitation–emission matrices, and fluorescence lifetimes, are presented. In addition to the literature, we also present our own detection systems for military applications.
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems)
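Fluorescence lifetimes such as those shown in the decay curves of Figures 6 and 7 are commonly extracted by least-squares fitting of an exponential decay model I(t) = A * exp(-t / tau) + B. Real bioaerosol decays are often multi-exponential; the sketch below fits a single component to synthetic data purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, A, tau, B):
    """Single-exponential fluorescence decay model I(t) = A*exp(-t/tau) + B."""
    return A * np.exp(-t / tau) + B

# Hypothetical decay trace: tau = 4 ns plus a small background and noise
t = np.linspace(0.0, 25e-9, 200)
I = decay(t, 1.0, 4e-9, 0.02) + 0.01 * np.random.randn(t.size)

(A, tau, B), _ = curve_fit(decay, t, I, p0=(1.0, 5e-9, 0.0))
print(f"fitted lifetime: {tau * 1e9:.2f} ns")
```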
Figure 1: Sizes of the organisms and their interferents.
Figure 2: Scheme of the system for measuring the spectra of single particles.
Figure 3: EX–EM matrices of the bacteria and their interferents.
Figure 4: EX–EM matrices of the technical spores.
Figure 5: PCA of the EX–EM matrices of the biological materials (description as in Table 1).
Figure 6: Fluorescence decay curves of NATA and coenzyme NADH.
Figure 7: Fluorescence decay characteristics for vegetative forms of bacteria (S.E. = Staphylococcus epidermidis, E.Cl. = Enterobacter cloacae, E.C. = Escherichia coli, B.S. = Bacillus subtilis, P.F. = Pseudomonas fluorescens).
Figure 8: Particle size and size distribution and the SEM image of the chosen particles.
Figure 9: Aerosol particle characteristics measured with a UV-APS analyzer.
Figure 10: The single-shot spectra from individual particles.
Figure 11: The idea of single particle measurements.
Figure 12: Construction of the particle concentrator and view of the chamber.
Figure 13: Schematic of the OPG laser.
Figure 14: Schematic diagram of the four-channel bioanalyzer: F1–F4 = optical filters.
Figure 15: Emission characteristics of a single particle of BG.
23 pages, 20552 KiB  
Article
ABANICCO: A New Color Space for Multi-Label Pixel Classification and Color Analysis
by Laura Nicolás-Sáenz, Agapito Ledezma, Javier Pascau and Arrate Muñoz-Barrutia
Sensors 2023, 23(6), 3338; https://doi.org/10.3390/s23063338 - 22 Mar 2023
Cited by 5 | Viewed by 2966
Abstract
Classifying pixels according to color, and segmenting the respective areas, are necessary steps in any computer vision task that involves color images. The gaps between human color perception, linguistic color terminology, and digital color representation are the main challenges in developing methods that properly classify pixels based on color. To address these challenges, we propose a novel method combining geometric analysis, color theory, fuzzy color theory, and multi-label systems for the automatic classification of pixels into 12 conventional color categories and the subsequent accurate description of each of the detected colors. The method presents a robust, unsupervised, and unbiased strategy for color naming, based on statistics and color theory. The proposed model, "ABANICCO" (AB ANgular Illustrative Classification of COlor), was evaluated through different experiments: its color detection, classification, and naming performance were assessed against the standardized ISCC–NBS color system, and its usefulness for image segmentation was tested against state-of-the-art methods. This empirical evaluation provided evidence of ABANICCO's accuracy in color analysis, showing how the proposed model offers a standardized, reliable, and understandable alternative for color naming that is recognizable by both humans and machines. Hence, ABANICCO can serve as a foundation for successfully addressing a myriad of challenges in various areas of computer vision, such as region characterization, histopathology analysis, fire detection, product quality prediction, object description, and hyperspectral imaging.
(This article belongs to the Special Issue Recent Trends and Advances in Color and Spectral Sensors)
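A classification of this kind operates on polar coordinates in the AB plane of CIELAB: the hue angle selects a chromatic category, while the radius (chroma) separates achromatic pixels from chromatic ones. The sketch below computes these quantities with scikit-image and buckets them with placeholder boundaries; the paper's actual category bisectors and its Brown/near-achromatic handling are not reproduced here:

```python
import numpy as np
from skimage import color

def ab_polar(rgb_image):
    """Per-pixel hue angle (degrees) and chroma in the AB plane of CIELAB,
    the polar quantities an ABANICCO-style classification is based on."""
    lab = color.rgb2lab(rgb_image)              # rgb floats in [0, 1], (H, W, 3)
    a, b = lab[..., 1], lab[..., 2]
    angle = np.degrees(np.arctan2(b, a)) % 360  # hue angle around the wheel
    chroma = np.hypot(a, b)                     # distance from the neutral axis
    return angle, chroma

def classify(angle, chroma, gray_thresh=8.0):
    """Toy classification: low-chroma pixels are Achromatic, others fall into
    hue bins. Bin edges and names are illustrative placeholders only."""
    labels = np.full(angle.shape, "Achromatic", dtype=object)
    chromatic = chroma >= gray_thresh
    bins = [0, 35, 70, 105, 165, 200, 260, 290, 330, 360]   # placeholder edges
    names = np.array(["Red-Orange", "Yellow-Orange", "Yellow", "Green", "Teal",
                      "Blue", "Ultramarine", "Purple", "Pink"], dtype=object)
    idx = np.clip(np.digitize(angle, bins) - 1, 0, len(names) - 1)
    labels[chromatic] = names[idx][chromatic]
    return labels
```

In the full model, the angular boundaries are derived automatically as bisectors between detected color bases rather than fixed by hand, which is what makes the naming unsupervised.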
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Color theory bases: (<b>A</b>) representation of color wheel; the color wheel is obtained from the primary colors (P) Red, Yellow, and Blue; by blending the primary colors, we get the secondary (S) colors, Green, Purple, and Orange; finally, by blending the secondary colors, we obtain the tertiary (T) colors, Yellow-Green, Blue-Green, Blue-Violet, and Red-Violet; (<b>B</b>) depiction of the concept of tints, shades, and tones with the color wheel in A; in every wheel, each ring going outward shows a 25% increase in added white for tints, black for shades, and gray for tones; these show the near-achromatic colors. (<b>C</b>) the four main color spaces used in digital imaging—RGB, YCbCr, HSV, and CIELAB; (<b>D</b>) color picker results displaying the colors within the shapes marked in the shades (s)- and tones (t)-modified color wheels; the circles show the effect of adding black (shades) or gray (tones) to orange; the triangle shows a dark shade of green called forest green; the obtained results show that what we understand as brown is not a pure color but rather shades of pure colors ranging from red to warm yellow.</p>
Full article ">Figure 2
<p>The two main steps of ABANICCO: (<b>A</b>) geometric knowledge generation; we used color theory to identify in the color wheel the localization of the different pure hues, shades, and tints within the reduced AB space of CIELAB; applying geometry, we used bisectors as the best boundaries between pure hues; by these means, we obtained a discrete polar color space divided into 12 different color categories—Pink, Red, Red-Orange, Yellow-Orange, Yellow, Green, Teal, Blue, Ultramarine, Purple, Brown, and Achromatic; the first 10 categories described hue, depending on the angle; the last two described chromaticity, and depended on the radius; (<b>B</b>) fuzzy space definition and multi-label classification for color description; we formalized fuzzy rules to translate the obtained discrete color polar space back into a continuous space that better mirrored human linguistics; based on the bisectors employed for the discretization of the space, we were able to define areas of absolute chromogen certainty and gradients of membership to the adjacent colors on both sides, which was done for both the radius (accounting for chromaticity) and the angle (accounting for hue); finally, with color theory, fuzzy membership functions, and multi-label classification, we assigned multiple, non-exclusive labels to each detected color, for accurate naming and description.</p>
Full article ">Figure 3
<p>Evaluation of our method with ISCC–NBS system of color categories: (<b>A</b>) Level 2 of the ISCC–NBS system, with each color’s name superimposed over its corresponding square; (<b>B</b>) ABANICCO classification of image (<b>A</b>) into the resulting 12 categories.</p>
Full article ">Figure 4
<p>Comparison of segmentation by our method (ABANICCO) versus fuzzy method in [<a href="#B41-sensors-23-03338" class="html-bibr">41</a>]: (<b>A</b>) original natural image with a flag waving; (<b>B</b>) fuzzy color space built from extracted prototypes, using the fuzzy method; (<b>C</b>) row of results (red, blue, and white mapping) of the segmentation, using supervised prototypes with the proposal in [<a href="#B41-sensors-23-03338" class="html-bibr">41</a>]; this technique used membership degrees to represent the segmentation; thus, lower saturation corresponded to lower membership certainty; (<b>D</b>) row of results (red, blue, and white category) of the segmentation using ABANICCO; the green and orange squares show areas where our method failed to segment the stripes accurately; (<b>E</b>) colors within the red, blue, and light categories found by ABANICCO; the stars mark the color of the areas of failed segmentation in (<b>D</b>); the green (orange) star marks the color of the area within the green (orange) rectangle; (<b>F</b>) row of segmentation results using ABANICCO after adjusting the boundaries; (<b>G</b>) polar description of the image with the original boundaries in gray and the revised boundaries shown with dotted black lines; (<b>H</b>) overlap of the masks obtained in row F and those obtained by [<a href="#B42-sensors-23-03338" class="html-bibr">42</a>]; the different shades of pink indicate where the method in [<a href="#B42-sensors-23-03338" class="html-bibr">42</a>] output an uncertain membership to that class—the stronger the pink, the lower the membership. In these images, we can see how their method fails in areas where the differences in illumination changed the expected colors: dark red (brown) and dark white (gray).</p>
Full article ">Figure 5
<p>Comparison of segmentation by our method versus the fuzzy method in [<a href="#B41-sensors-23-03338" class="html-bibr">41</a>], using a simplified version of the image shown in <a href="#sensors-23-03338-f001" class="html-fig">Figure 1</a>: (<b>A</b>) the middle rectangle shows the unraveled color wheel; the rectangles above (below) add an increasing quantity of black (gray) to represent shades (tones); (<b>B</b>) results of classification using fuzzy color spaces with three arbitrary classes; the lack of labels and the gradual transitions between colors resulted in membership maps with small areas of absolute membership and unclear class separations; (<b>C</b>) results of color classification using ABANICCO; our method automatically divided the image into 12 classes with clear boundaries, corresponding to the main hues present plus the near-achromatic brown shades and tones and the achromatic white and black (empty for this particular image).</p>
Full article ">Figure 6
<p>The ITTP-Close experimental dataset: (<b>A</b>) RGB images present in the dataset; (<b>B</b>) the corresponding ground truths; (<b>C</b>) a closer look at the different colors in one area, as defined by the ground truth.</p>
Full article ">Figure 7
<p>Segmentation of sample image BONCH-01 of the ITTP-Close experimental dataset, using class remodeling of ABANICCO&#8217;s detected colors for clustering-based image segmentation.</p>
Full article ">Figure A1
<p>Complete version of the experiment shown in <a href="#sensors-23-03338-f002" class="html-fig">Figure 2</a>A. Top row: from the color wheel, the AB double histogram was created; then, a simplified AB double histogram was computed and skeletonized to find the color bases; the center of the space and the center of the skeleton were marked with a green dot; the branches and the endpoints were marked with blue and red points, respectively; the dark and light orange circles indicate the positions of the achromatic (Scale of Grays) and near-achromatic (Brown) areas, respectively. Bottom row: the boundaries between the located color bases were chosen as each angle bisector; the final plot shows the proposed discrete polar color space.</p>
Full article ">Figure A2
<p>Segmentations obtained after using machine learning methods and the bisector approach to find the boundaries between the color bases. This evaluation was carried out in three rectangles depicting the concept of color wheel shades and tones. Rectangle 1 shows the full color wheel. Rectangle 3 above was obtained by adding 75% black to represent shades. Rectangle 2 below was obtained by adding 75% gray to represent tones. Above each rectangle are lines measuring the length of the classes obtained by each method. The solid gray lines represent the classification carried out by wKNN. The solid black lines represent the classification by WNN. The color lines represent the classification by our method, with just the bisectors and, above, the simplified final ABANICCO classes. The black rectangle marks the area used for the evaluation in <a href="#sensors-23-03338-f0A3" class="html-fig">Figure A3</a>.</p>
Full article ">Figure A3
<p>Closer look at the comparison of color classification by machine learning methods and our method, using the area marked in <a href="#sensors-23-03338-f0A2" class="html-fig">Figure A2</a>. This evaluation was carried out in 7 rectangles depicting the concept of shades and tones of a portion of the color wheel. Rectangle 1 shows the portion of the color wheel from bluish-green to red. The rectangles above this were obtained by adding an increasing quantity of black to represent shades (Rectangles 2.a, 3.a, and 4.a). The rectangles below were obtained by adding an increasing quantity of gray to represent tones (Rectangles 2.b, 3.b, and 4.b). The figure displays the results of color classification over the region covering the purplish-blue and pinkish-red colors. The region&#8217;s exact limits and length depended on the shape of the class created by each method. Above each rectangle, three different lines measured the length of the segmentations by each approach for this portion of the color wheel: the solid gray line corresponded to the k-nearest neighbors algorithm (KNN); the solid black line to the Wide Neural Network (WNN); the tri-color line to our method. A single solid-color line represents each machine learning method, because both failed to create appropriate boundaries between blues, purples, and pinks. By contrast, our method is represented using three colors, because it obtained three different classes in all the experiments, regardless of tonality. To both sides of these lines are arrows indicating the classes obtained before and after this section; the actual segmentation performed by the three methods on Rectangles 4.a, 1, and 4.b is shown to their right. The separation between classes of our method is marked with white lines for clarity.</p>
Full article ">
11 pages, 7348 KiB  
Communication
Optimized Weight Low-Frequency Search Coil Magnetometer for Ground–Airborne Frequency Domain Electromagnetic Method
by Fei Teng, Ye Tong and Bofeng Zou
Sensors 2023, 23(6), 3337; https://doi.org/10.3390/s23063337 - 22 Mar 2023
Cited by 2 | Viewed by 3067
Abstract
The vertical component magnetic field signal in the ground–airborne frequency domain electromagnetic (GAFDEM) method is detected by the air coil sensor, which is parallel to the ground. Unfortunately, the air coil sensor has low sensitivity in the low-frequency band, making it challenging to detect effective low-frequency signals and causing low accuracy and large errors in the interpreted deep apparent resistivity in actual detection. This work develops an optimized weight magnetic core coil sensor for GAFDEM. A cupped flux concentrator is used to reduce the sensor's weight while maintaining the magnetic gathering capacity of the core coil. The winding of the core coil is optimized to resemble the shape of a rugby ball, taking full advantage of the magnetic gathering capacity at the core center. Laboratory and field experiment results show that the developed optimized weight magnetic core coil sensor for the GAFDEM method is highly sensitive in the low-frequency band. Therefore, the detection results at depth are more accurate compared with those obtained using existing air coil sensors. Full article
(This article belongs to the Section Physical Sensors)
Figure 1
<p>Experimental model of the search coil magnetometer.</p>
Full article ">Figure 2
<p>Equivalent circuit of the search coil (<b>a</b>) and its frequency response (<b>b</b>).</p>
Full article ">Figure 3
<p>Equivalent circuit of the search coil with matching resistor (<b>a</b>) and its frequency response (<b>b</b>).</p>
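As a hedged illustration of the circuits in Figures 2 and 3, the sketch below evaluates the transfer function of a textbook search-coil model: the induced EMF in series with the coil resistance R and inductance L, loaded by stray capacitance C and a damping (matching) resistor Rd. All component values are arbitrary assumptions for demonstration, not the sensor's measured parameters.

```python
import numpy as np

def coil_response(f, R=1e3, L=10.0, C=1e-9, Rd=100e3):
    """|V_out / EMF| for a series R-L coil loaded by stray C and damping Rd."""
    w = 2 * np.pi * np.asarray(f, dtype=float)
    Zc = 1.0 / (1j * w * C)               # stray-capacitance impedance
    Zp = Rd * Zc / (Rd + Zc)              # damping resistor in parallel with C
    return np.abs(Zp / (Zp + R + 1j * w * L))

f = np.logspace(0, 5, 400)                # 1 Hz .. 100 kHz
H = coil_response(f)
f0 = 1.0 / (2 * np.pi * np.sqrt(10.0 * 1e-9))   # LC resonance of this model
print(f"model resonance ~ {f0:.0f} Hz, peak gain {H.max():.2f}")
```

Seen this way, the matching resistor damps the LC resonance: a smaller Rd flattens the resonance peak at the cost of gain, widening the usable band, consistent with the flattened response the caption attributes to panel (<b>b</b>).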
Full article ">Figure 4
<p><math display="inline"><semantics> <mrow> <msub> <mrow> <mi>μ</mi> </mrow> <mrow> <mi>a</mi> <mi>p</mi> <mi>p</mi> </mrow> </msub> </mrow> </semantics></math> in different flux concentrators: (<b>a</b>) magnetic core with rod core and cupped flux concentrator, (<b>b</b>) magnetic core only with rod core, and (<b>c</b>) magnetic core with rod core and flux concentrator.</p>
Full article ">Figure 5
<p>Transmittance of GAFDEM as a function of the weight of the search coil and the frequency of the search coil.</p>
Full article ">Figure 6
<p>Comparison of magnetic field collection capacity of three core structures.</p>
Full article ">Figure 7
<p>2D axisymmetric model of the winding and magnetic core.</p>
Full article ">Figure 8
<p>Sensitivity of air coil and core coil.</p>
Full article ">Figure 9
<p>Overview of sensitivity experiment of core coil.</p>
Full article ">Figure 10
<p>Location of the field experiment.</p>
Full article ">Figure 11
<p>Overview of the field experiment.</p>
Full article ">Figure 12
<p>Comparison of detection performance utilizing different systems: (<b>a</b>) air coil and (<b>b</b>) air coil and core coil.</p>
Full article ">
19 pages, 13346 KiB  
Article
Lightweight SM-YOLOv5 Tomato Fruit Detection Algorithm for Plant Factory
by Xinfa Wang, Zhenwei Wu, Meng Jia, Tao Xu, Canlin Pan, Xuebin Qi and Mingfu Zhao
Sensors 2023, 23(6), 3336; https://doi.org/10.3390/s23063336 - 22 Mar 2023
Cited by 25 | Viewed by 4203
Abstract
Due to their rapid development and wide application in modern agriculture, robots, mobile terminals, and intelligent devices have become vital technologies and fundamental research topics for the development of intelligent and precision agriculture. Accurate and efficient target detection technology is required for mobile inspection terminals, picking robots, and intelligent sorting equipment in tomato production and management in plant factories. However, due to the limitations of computing power, storage capacity, and the complexity of the plant factory (PF) environment, the precision of small-target detection for tomatoes in real-world applications is inadequate. Therefore, we propose an improved Small MobileNet YOLOv5 (SM-YOLOv5) detection algorithm and model based on YOLOv5 for target detection by tomato-picking robots in plant factories. Firstly, MobileNetV3-Large was used as the backbone network to make the model structure lightweight and improve its running performance. Secondly, a small-target detection layer was added to improve the accuracy of small-target detection for tomatoes. The constructed PF tomato dataset was used for training. Compared with the YOLOv5 baseline model, the mAP of the improved SM-YOLOv5 model was increased by 1.4%, reaching 98.8%. The model size was only 6.33 MB, 42.48% of that of YOLOv5, and it required only 7.6 GFLOPs, half that required by YOLOv5. The experiment showed that the improved SM-YOLOv5 model had a precision of 97.8% and a recall rate of 96.7%. The model is lightweight and has excellent detection performance, and so it can meet the real-time detection requirements of tomato-picking robots in plant factories. Full article
Figure 1
<p>Attribute visualization results of the dataset in this study: (<b>a</b>) the number of dataset labels, (<b>b</b>) the label ratio of the dataset, (<b>c</b>) the label locations of the dataset, (<b>d</b>) the label sizes of the dataset.</p>
Full article ">Figure 2
<p>Diagram illustrating dataset annotation using LabelImg.</p>
Full article ">Figure 3
<p>The integrated architecture of SM-YOLOv5, in which the backbone network (in blue) was replaced with MobileNetV3-Large. The small-target detection layer added to the original three-layer target detection model is represented by the red box. The FPN and PAN structures (in yellow and cyan boxes, respectively) were supplemented with a small-object detection layer to enhance the detection of small targets.</p>
Full article ">Figure 4
<p>Flowchart illustrating the training and detection process of SM-YOLOv5, with the training phase represented by orange boxes and the detection phase represented by green boxes.</p>
Full article ">Figure 5
<p>Schematic diagram of separable convolution.</p>
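Separable convolution is the building block behind MobileNetV3's efficiency gains mentioned in the abstract. Here is a minimal PyTorch sketch with illustrative channel sizes and activation, not the exact SM-YOLOv5 block:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) conv followed by a 1x1 pointwise conv."""
    def __init__(self, c_in, c_out, k=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, stride,
                                   padding=k // 2, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.Hardswish()   # activation family used in MobileNetV3

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 80, 80)
print(DepthwiseSeparableConv(32, 64)(x).shape)   # torch.Size([1, 64, 80, 80])
```

The factorization needs k·k·c_in + c_in·c_out weights instead of k·k·c_in·c_out for a standard convolution (288 + 2,048 versus 18,432 in this example), which is broadly where such model-size savings come from.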
Full article ">Figure 6
<p>Comparison of multi-layer detection results. Detection results for (<b>a</b>) large targets, (<b>b</b>) medium targets, (<b>c</b>) small targets, and (<b>d</b>) multi-layer target fusion detection. Borders and text background colors indicate that the recognized classification was “green” or “red” fruit. White circle callouts indicate tomato fruits that were not correctly identified.</p>
Full article ">Figure 7
<p>Training results of different models.</p>
Full article ">Figure 8
<p>Visualization of the results from ablation experiments conducted using YOLOv5, S-YOLOv5, M-YOLOv5, and SM-YOLOv5 methods.</p>
Full article ">
23 pages, 12903 KiB  
Article
Sensor Fusion in Autonomous Vehicle with Traffic Surveillance Camera System: Detection, Localization, and AI Networking
by Muhammad Hasanujjaman, Mostafa Zaman Chowdhury and Yeong Min Jang
Sensors 2023, 23(6), 3335; https://doi.org/10.3390/s23063335 - 22 Mar 2023
Cited by 23 | Viewed by 11685
Abstract
Complete autonomous systems such as self-driving cars need the most efficient combination of four-dimensional (4D) detection, exact localization, and artificial intelligence (AI) networking to ensure the high reliability and safety of humans and to establish a fully automated smart transportation system. At present, multiple integrated sensors such as light detection and ranging (LiDAR), radio detection and ranging (RADAR), and car cameras are frequently used for object detection and localization in the conventional autonomous transportation system. Moreover, the global positioning system (GPS) is used for the positioning of autonomous vehicles (AV). The detection, localization, and positioning efficiency of these individual systems is insufficient for AV systems. In addition, they lack a reliable networking system for self-driving cars carrying people and goods on the road. Although the sensor fusion technology of car sensors achieves good efficiency in detection and localization, the proposed convolutional neural network approach will help achieve higher accuracy in 4D detection, precise localization, and real-time positioning. Moreover, this work will establish a strong AI network for remote AV monitoring and data transmission systems. The proposed networking system's efficiency remains the same on open-sky highways as well as in tunnel roads where GPS does not work properly. For the first time, modified traffic surveillance cameras are exploited in this conceptual paper as an external image source for AVs and as anchor sensing nodes to complete the AI networking transportation system. This work proposes a model that solves AVs' fundamental detection, localization, positioning, and networking challenges with advanced image processing, sensor fusion, feature matching, and AI networking technology. This paper also provides an experienced AI driver concept for a smart transportation system with deep learning technology. Full article
(This article belongs to the Special Issue Advances in Intelligent Transportation Systems Based on Sensor Fusion)
Figure 1
<p>Traffic surveillance road camera as an external image source and anchor node for exact detection and AI networking system.</p>
Full article ">Figure 2
<p>Road vehicle automation levels and conditions, from no automation to full automation, where level zero has no automation and level five is fully automated.</p>
Full article ">Figure 3
<p>Surround sensing of an autonomous vehicle with different types of cameras, RADARs, LiDAR, and ultrasonic sensors.</p>
Full article ">Figure 4
<p>Graphical view of the detection capacity of an AV&#8217;s camera, RADAR, and LiDAR sensors with fusion detection.</p>
Full article ">Figure 5
<p>Centralized, decentralized, and distributed types of fusion technologies used for autonomous systems design. Distributed fusion technology has been used in the proposed system.</p>
Full article ">Figure 6
<p>The summarized flow chart of sensor fusion technologies in AVs.</p>
Full article ">Figure 7
<p>Proposed exact 4D detection and AI networking model for AV with traffic surveillance camera and car sensors.</p>
Full article ">Figure 8
<p>Proposed real-time positioning, localizing, and monitoring system of AVs by anchor node monitoring system.</p>
Full article ">Figure 9
<p>Proposed AI multi-networking technology for AV with modified traffic surveillance camera system.</p>
Full article ">Figure 10
<p>Proposed localization model for AV positioning by the multi-anchor-node traffic surveillance system.</p>
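One way to read Figure 10 is as multilateration: the AV's position is fixed from the known coordinates of several camera anchor nodes and range estimates to each. The sketch below shows a generic linearized least-squares solver; the anchor layout, ranges, and noise level are hypothetical, and the paper's actual estimator may differ.

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Linearized least-squares position from anchors and range estimates."""
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0]**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical camera anchor positions (m) and noisy ranges to the AV.
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [50.0, 30.0], [0.0, 30.0]])
true_pos = np.array([22.0, 14.0])
rng = np.random.default_rng(0)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.1, 4)
print(multilaterate(anchors, ranges))    # recovers approximately [22, 14]
```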
Full article ">Figure 11
<p>Convolutional neural networking model for DL and AI processing.</p>
Full article ">Figure 12
<p>AI algorithm and CNNs architectural block diagram for surveillance camera and car sensors integrated detection system in FSCDS.</p>
Full article ">Figure 13
<p>The 3D detection performance analysis: (<b>a</b>) 3D approximate partial detections by car sensors, and (<b>b</b>) 3D detection improvement with car sensor and surveillance camera view.</p>
Full article ">Figure 14
<p>Comparative ADPA distributed fusion of different sensors. (<b>a</b>) Car camera, LiDAR, and RADAR detection, (<b>b</b>) car camera and advanced RADAR detection, (<b>c</b>) car camera, RADAR, and surveillance camera detection, and (<b>d</b>) FSCDS detection.</p>
Full article ">Figure 15
<p>AV sensor fusion detection accuracy estimation with respect to car sensors.</p>
Full article ">Figure 16
<p>Detection estimation accuracy comparison of AV sensor fusion vs. proposed FSCDS approach.</p>
Full article ">Figure 17
<p>Detection accuracy comparison of fusion detection and FSCDS.</p>
Full article ">Figure 18
<p>Localization accuracy comparison of fusion detection and FSCDS.</p>
Full article ">
15 pages, 5287 KiB  
Article
Autonomous Planning of Discontinuous Terrain-Dependent Crawling for Space Dobby Robots
by Jiabo Jiang, Cheng Wei, Yunfeng Yu and Shengxin Sun
Sensors 2023, 23(6), 3334; https://doi.org/10.3390/s23063334 - 22 Mar 2023
Cited by 2 | Viewed by 1531
Abstract
Complex space missions increasingly require space robots to perform extravehicular operations by crawling on spacecraft surfaces whose graspable points are discontinuous, which greatly increases the difficulty of space robot motion manipulation. Therefore, this paper proposes an autonomous planning method for space dobby robots based on dynamic potential fields. This method realizes the autonomous crawling of space dobby robots in discontinuous environments while considering the task objectives and the self-collision problem of the robotic arms during crawling. In this method, a hybrid event–time trigger, with event triggering as the main trigger, is proposed by combining the working characteristics of space dobby robots with an improved gait timing trigger; a dynamic potential field function is designed to adaptively adjust the robotic arm&#8217;s grasping point according to the robot&#8217;s state. Simulation results verify the effectiveness of the proposed autonomous planning method. Full article
(This article belongs to the Section Physical Sensors)
Figure 1
<p>Mechanism diagram of a space dobby robot.</p>
Full article ">Figure 2
<p>Robotic arm configuration mechanism diagram.</p>
Full article ">Figure 3
<p>Schematic diagram of the space robot&#8217;s arm numbering. The numbers are manually annotated to facilitate the description of relative positional relationships.</p>
Full article ">Figure 4
<p>Time-triggered hybrid trigger. The numbers are manually annotated to facilitate the description of relative positional relationships; details are in <a href="#sensors-23-03334-f003" class="html-fig">Figure 3</a>.</p>
Full article ">Figure 5
<p>Event-triggered hybrid trigger. The numbers are manually annotated to facilitate the description of relative positional relationships; details are in <a href="#sensors-23-03334-f003" class="html-fig">Figure 3</a>.</p>
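A hybrid event–time trigger of the kind contrasted in Figures 4 and 5 can be paraphrased as: fire a gait step on an event whenever possible, and fall back to a timer otherwise. A toy sketch of that pattern follows; the function and argument names are invented for illustration.

```python
import time

def hybrid_trigger(event_fired, t_last, period=2.0):
    """Decide whether to start the next gait step.

    event_fired: primary event trigger (e.g., the grasp point is secured).
    t_last:      monotonic time of the previous step.
    period:      fallback time-trigger interval in seconds (invented value).
    """
    now = time.monotonic()
    if event_fired:                 # event trigger is the main path
        return True, "event", now
    if now - t_last >= period:      # timer acts only as a fallback
        return True, "timeout", now
    return False, None, t_last

fire, reason, t = hybrid_trigger(event_fired=False, t_last=time.monotonic())
```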
Full article ">Figure 6
<p>Schematic diagram of the whole-body controller.</p>
Full article ">Figure 7
<p>Whole-body controller design and overall control flow.</p>
Full article ">Figure 8
<p>Trot gait planning motion state.</p>
Full article ">Figure 9
<p>Dynamic potential field diagram with constraints.</p>
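Figure 9's constrained dynamic potential field combines attraction to the task goal with repulsion from constraint regions. The following generic artificial-potential-field sketch illustrates the idea; the gains, influence radius, and scenario are invented, not the paper's formulation.

```python
import numpy as np

def apf_step(p, goal, obstacles, k_att=1.0, k_rep=0.01, rho0=0.3):
    """One negative-gradient step of an attractive + repulsive potential."""
    grad = k_att * (goal - p)                       # pull toward the goal
    for obs in obstacles:
        d = np.linalg.norm(p - obs)
        if 0 < d < rho0:                            # repel only inside rho0
            grad += k_rep * (1/d - 1/rho0) / d**3 * (p - obs)
    return grad

p, goal = np.array([0.0, 0.0]), np.array([1.0, 0.5])
obstacles = [np.array([0.5, 0.1])]                  # e.g., a keep-out region
for _ in range(500):                                # plain gradient descent
    p = p + 0.01 * apf_step(p, goal, obstacles)
print(p)  # ends near the goal while skirting the obstacle
```

In the paper the field is dynamic: the goal and constraint terms evolve with the robot's state, so the arm's grasping point is re-selected as the potential landscape changes; the classic local-minimum caveat of potential fields still applies.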
Full article ">Figure 10
<p>Space robot crawling process.</p>
Full article ">Figure 11
<p>Comparison of expected and actual displacement in the X-direction of base truss crawling.</p>
Full article ">Figure 12
<p>Comparison of expected and actual displacement in the Y-direction of base truss crawling.</p>
Full article ">Figure 13
<p>Comparison of expected and actual displacement in the Z-direction of base truss crawling.</p>
Full article ">Figure 14
<p>Deviation of base truss crawling displacement.</p>
Full article ">
Previous Issue
Next Issue
Back to TopTop