Sensors, Volume 22, Issue 11 (June-1 2022) – 345 articles

Cover Story: Rigid grippers generally grasp an object with the tip of each finger, so a sensor can be embedded at each fingertip to cover that contact area. Soft grippers, by contrast, grasp an object over a larger contact area, which requires a wide sensing region with high spatial resolution. This paper presents the design and development of a novel low-cost, multi-touch sensor based on capacitance variations. The proposed sensor is composed of a stack of layers: silicone layers carrying electrically conductive ink and a pressure-sensitive conductive paper sheet. The sensor is very flexible and easy to fabricate, making it an appropriate choice for soft-robot applications. The experiments conducted demonstrated that the sensor measures applied forces and contact points with good precision.
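As a rough intuition for how such a capacitive tactile cell responds to load, the sketch below models one sensing cell as a parallel-plate capacitor whose silicone dielectric compresses under force; the geometry, permittivity, and stiffness values are illustrative assumptions, not the calibration used in the paper.

```python
# Illustrative sketch (not the paper's calibration): a parallel-plate model of one
# capacitive taxel, where applied force compresses the dielectric and raises C.
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def taxel_capacitance(area_m2, gap_m, eps_r=3.0):
    """Capacitance of one sensing cell modelled as a parallel-plate capacitor."""
    return eps_r * EPS0 * area_m2 / gap_m

def gap_under_force(force_n, gap0_m, stiffness_n_per_m):
    """Assume the silicone layer compresses linearly with the applied force."""
    return max(gap0_m - force_n / stiffness_n_per_m, 1e-6)

# Example: a 5 mm x 5 mm cell, 0.5 mm silicone gap, 10 kN/m effective stiffness.
area, gap0, k = 25e-6, 0.5e-3, 1e4
for f in (0.0, 0.5, 1.0, 2.0):                     # applied force in newtons
    c = taxel_capacitance(area, gap_under_force(f, gap0, k))
    print(f"force {f:.1f} N -> capacitance {c*1e12:.2f} pF")
```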
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
13 pages, 1965 KiB  
Article
Force and Torque Characterization in the Actuation of a Walking-Assistance, Cable-Driven Exosuit
by Daniel Rodríguez Jorge, Javier Bermejo García, Ashwin Jayakumar, Rafael Lorente Moreno, Rafael Agujetas Ortiz and Francisco Romero Sánchez
Sensors 2022, 22(11), 4309; https://doi.org/10.3390/s22114309 - 6 Jun 2022
Cited by 8 | Viewed by 2842
Abstract
Soft exosuits stand out in the development of walking-assistance devices thanks to their higher degree of wearability, lower weight, and lower price compared to the bulkier equivalent rigid exoskeletons. In cable-driven exosuits, the acting force is driven by cables from the actuation system to the anchor points; thus, the user’s movement is not restricted by a rigid structure. In this paper, a 3D inverse dynamics model is proposed and integrated with a model for a cable-driven actuation to predict the required motor torque and traction force in cables for a walking-assistance exosuit during gait. Joint torques are to be shared between the user and the exosuit for different design configurations, focusing on both hip and ankle assistance. The model is expected to guide the design of the exosuit regarding aspects such as the location of the anchor points, the cable system design, and the actuation units. An inverse dynamics analysis is performed using gait kinematic data from a public dataset to predict the cable forces and position of the exosuit during gait. The obtained joint reactions and cable forces are compared with those in the literature and prove the model to be accurate and ready to be implemented in an exosuit control scheme. The results obtained in this study are similar to those found in the literature regarding the walking study itself as well as the forces under which cables operate during gait and the cable position cycle.
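As a back-of-the-envelope illustration of the torque-to-cable-force step described above, the sketch below converts an assisted fraction of a joint torque into cable tension via an assumed moment arm at the anchor point; the torque profile, assistance level, and moment arm are hypothetical values, not the paper's results.

```python
# Hypothetical illustration of the cable-force idea in the abstract: if the exosuit
# supplies a fraction of the joint torque through a cable acting at a moment arm r
# about the joint, the required cable tension is tau_assist / r.
import numpy as np

def cable_tension(joint_torque_nm, assist_fraction, moment_arm_m):
    """Cable tension needed to deliver assist_fraction of the joint torque."""
    return assist_fraction * np.asarray(joint_torque_nm) / moment_arm_m

# Example values (not from the paper): 60 N·m peak hip torque over a gait cycle,
# 30% assistance, 5 cm effective moment arm at the anchor point.
gait = np.linspace(0.0, 1.0, 101)                  # fraction of the gait cycle
hip_torque = 60.0 * np.sin(2 * np.pi * gait)       # toy torque profile, N·m
tension = cable_tension(hip_torque, 0.30, 0.05)
print(f"peak cable tension ≈ {tension.max():.0f} N")
```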
Figures:
Figure 1: Simplified scheme for the considered lower-limb segments.
Figure 2: Geometric model for calculating the cable traction forces from the expected joint torques in both the hip and ankle sub-systems.
Figure 3: Representation of the normalized joint torques obtained via the proposed model for the selected population, according to anthropometric data in [25] (blue) and those in [24] for the “comfortable” gait speed (orange). Average values as solid lines. In dotted lines, ±standard deviation.
Figure 4: (a) Cable force obtained using the proposed model for the data in [24] to assist 30% of the total torque. Average values as solid lines. In dotted lines, ±standard deviation. (b) Representation of the cable force of the hip subsystem according to [5].
Figure 5: (a) Torque at the motor shaft during the walking cycle for the hip subsystem and (b) for the ankle sub-system. Average values as solid lines. In dotted lines, ±standard deviation.
Figure 6: (a) Cable extension during the walking cycle for the hip and (b) ankle subsystems. Average values as solid lines. In dotted lines, ±standard deviation.
Figure 7: Cable position at the motor with and without stiffness along with x_s.
Figure 8: 3D model of the exosuit along with the lower segment and torso.
16 pages, 3987 KiB  
Article
Multiplexed Readout for an Experiment with a Large Number of Channels Using Single-Electron Sensitivity Skipper-CCDs
by Claudio R. Chavez, Fernando Chierchie, Miguel Sofo-Haro, Jose Lipovetzky, Guillermo Fernandez-Moroni and Juan Estrada
Sensors 2022, 22(11), 4308; https://doi.org/10.3390/s22114308 - 6 Jun 2022
Cited by 2 | Viewed by 2274
Abstract
This paper presents the implementation of a multiplexed analog readout electronics system that can achieve single-electron counting using Skipper-CCDs with non-destructive readout. The proposed system allows the best performance of the sensors to be maintained, with sub-electron noise-level operation, while keeping low-bandwidth data transfer, a minimum number of analog-to-digital converters (ADCs), and a low disk storage requirement with zero added multiplexing time, even for the simultaneous operation of thousands of channels. These features are possible with a combination of analog charge pile-up, sample-and-hold circuits, and analog multiplexing. The implementation also aims to use the minimum number of components in circuits to keep compatibility with high-channel-density experiments using Skipper-CCDs for low-threshold particle detection applications. Performance details and experimental results using a sensor with 16 output stages are presented along with a review of the circuit design considerations.
(This article belongs to the Special Issue Semiconducting and Superconducting Detectors)
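For context on why non-destructive readout enables sub-electron noise, the toy simulation below averages N independent reads of the same pixel and recovers the 1/√N noise scaling shown in the paper's Figure 10; the single-sample noise value is an assumption chosen so that N = 400 lands near the 0.16 e− rms quoted in Figure 9, not a measured parameter.

```python
# Toy simulation of Skipper-CCD non-destructive readout: each pixel charge is read
# N times with independent Gaussian readout noise, and the N samples are averaged,
# so the effective noise falls as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
single_read_noise_e = 3.2      # assumed single-sample noise in electrons (illustrative)
true_charge_e = 0              # empty (overscan-like) pixels
n_pixels = 20000

for n_samples in (1, 100, 400):
    reads = true_charge_e + rng.normal(0.0, single_read_noise_e, (n_pixels, n_samples))
    averaged = reads.mean(axis=1)
    print(f"N = {n_samples:4d}: measured noise = {averaged.std():.3f} e-, "
          f"expected = {single_read_noise_e / np.sqrt(n_samples):.3f} e-")
```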
Figures:
Figure 1: (Left) Design of the OSCURA Multi-Chip Module (MCM) with 16 sensors mounted on a 150 mm silicon wafer; (Right) OSCURA super-module (SM) with 16 MCMs held by a copper support structure.
Figure 2: Block diagram of the OSCURA system with two-stage multiplexing.
Figure 3: Simplified schematic of the analog processing chain and the first-stage multiplexer. Switch selection is shown for logic states +int = 1, −int = 1, reset = 1 and SH = 0.
Figure 4: Simplified timing diagram of the acquisition sequence. Δt1 = t_i and Δt2 = t_i are the periods of inverting and non-inverting integration, respectively; Δq represents the increment from the previous DSI result. −int, +int, SH and reset are the digital control signals of the switches. V_CCD(t) is the CCD video signal, V_out(t) is the output of the integrator and M_out(t) is the output of the analog multiplexer.
Figure 5: The 16-channel Skipper-CCD used for experimental verification in the chamber.
Figure 6: LTA connected to the 16-channel multiplexed analog front-end board.
Figure 7: Sample images obtained for the 16 channels after software de-multiplexing. Cosmic ray events are observed.
Figure 8: (Top) Raw signal measurement of the pile-up signal and multiplexer output. The sample of one pixel being measured and pile-up is indicated. The multiplexing is also shown; (Bottom) Zoomed-in region of the top figure including the last pixel sample pile-up and the 16-channel multiplexing.
Figure 9: Histogram of pixels in overscan for one channel. A gain of 622 ADU/e− and noise of 0.16 e−_rms were obtained by fitting Gaussian functions to each of the peaks. Zero-electron peak fitting shown with red line.
Figure 10: Noise versus number of samples per pixel (N). The stars show the best measurement with Skipper-CCD and LTA controller reported in [10]. Solid lines show the expected 1/√N reductions, taking a point at N = 400 as the starting point.
Figure 11: Oscilloscope measurement of waveforms for serial multiplexing (left) and parallel multiplexing (right) for N = 5. Top trace: video signal at the output of the CCD; middle trace: integrator output with 5-sample pile-up; bottom trace: multiplexer output. For the serial readout, the multiplexing takes place after the integration; for the parallel readout, the multiplexing takes place while the next pixel is being read out.
Figure 12: Relative noise: quotient of the noise obtained by serial multiplexing to parallel multiplexing as a function of N. For each value of N = 40, 100, 200, 300, 400, each dot is the relative noise measured in each channel.
Figure 13: Histogram of zero- and one-electron peaks for serial and parallel multiplexing computed for N = 400.
18 pages, 4739 KiB  
Article
A Combined Sensing System for Intrusion Detection Using Anti-Jamming Random Code Signals
by Hang Xu, Yingxin Li, Cheng Ma, Li Liu, Bingjie Wang and Jingxia Li
Sensors 2022, 22(11), 4307; https://doi.org/10.3390/s22114307 - 6 Jun 2022
Cited by 2 | Viewed by 2456
Abstract
In order to prevent illegal intrusion, theft, and destruction, important places require stable and reliable human intrusion detection technology to maintain security. In this paper, a combined sensing system using anti-jamming random code signals is proposed and demonstrated experimentally to detect a human intruder in the protected area. This sensing system combines the leaky coaxial cable (LCX) sensor and the single-transmitter-double-receivers (STDR) radar sensor. They transmit orthogonal physical random code signals generated by Boolean chaos as the detection signals. The LCX sensor realizes the early intrusion alarm at the protected area boundary by comparing the correlation traces before and after intrusion. Meanwhile, the STDR radar sensor is used to track the intruder’s moving path inside the protected area by correlation ranging and ellipse positioning, as well as recognizing the intruder’s activities by time-frequency analysis, feature extraction, and a support vector machine. The experimental results demonstrate that this combined sensing system not only realizes the early alarm and path tracking for the intruder with 13 cm positioning accuracy, but also recognizes the intruder’s eight activities, including squatting, picking up, jumping, waving, walking forward, running forward, walking backward, and running backward, with 98.75% average accuracy. Benefiting from the natural randomness and auto-correlation of the random code signal, the proposed sensing system is also shown to have a large anti-jamming tolerance of 27.6 dB, which allows it to be used in complex electromagnetic environments.
(This article belongs to the Section Physical Sensors)
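The correlation-ranging step mentioned in the abstract can be illustrated with the short sketch below: cross-correlate the received echo with the transmitted random code, locate the peak, and convert the delay into a round-trip range. The sketch uses a plain pseudorandom binary sequence rather than the authors' Boolean-chaos generator, and all signal parameters are illustrative.

```python
# Illustrative correlation ranging with a random binary code: the lag of the
# cross-correlation peak gives the round-trip delay, hence the target range.
import numpy as np

rng = np.random.default_rng(1)
fs = 2e9                                   # 2 GS/s sampling of a 2 Gbps-like code
c = 3e8                                    # propagation speed, m/s
code = rng.choice([-1.0, 1.0], size=4096)  # transmitted random code signal

true_range_m = 6.5
delay_samples = int(round(2 * true_range_m / c * fs))
echo = np.zeros_like(code)
echo[delay_samples:] = 0.4 * code[:len(code) - delay_samples]   # attenuated, delayed copy
echo += rng.normal(0, 0.2, size=echo.size)                      # additive interference

corr = np.correlate(echo, code, mode="full")[len(code) - 1:]    # keep lags >= 0
est_delay = int(np.argmax(corr))
print(f"estimated range ≈ {est_delay / fs * c / 2:.2f} m (true {true_range_m} m)")
```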
Figures:
Figure 1: Experimental setup of the combined intrusion-detection sensing system.
Figure 2: Experimental scene of the intruder entering the protected area.
Figure 3: Generation method of the 500 Mbps and 2 Gbps random code signals.
Figure 4: (a1,a2) Time-domain waveforms; (b1,b2) power spectrums; (c1,c2) auto-correlation traces of the 500 Mbps and 2 Gbps random code signals; and (d) their cross-correlation trace.
Figure 5: Intrusion detection algorithm of the proposed combined sensing system.
Figure 6: Measurement principle of early alarm.
Figure 7: (a) Intrusion process, and the detection results of (b) early alarm, (c) path tracking, and (d) ranging at position P.
Figure 8: Geometries of the intruder’s eight activities.
Figure 9: TF diagrams and the corresponding first principal components of (a) squatting, (b) picking up, (c) jumping, (d) waving, (e) walking forward, (f) running forward, (g) walking backward, and (h) running backward.
Figure 10: (a) Relationship curve between the PNR and ISR; (b) comparison results of correlation ranging traces without and with the white/colored noise interference when ISR = 27.6 dB.
Figure 11: TR diagrams of walking forward (a) without and (b) with the white noise interference when ISR = 27.6 dB.
Figure 12: TF diagrams and the corresponding first principal components of (a) squatting, (b) picking up, (c) jumping, (d) waving, (e) walking forward, (f) running forward, (g) walking backward, and (h) running backward when ISR = 27.6 dB.
Figure 13: Variation curves of (a) CC and (b) recognition accuracy for eight activities with the increase in ISR.
16 pages, 1983 KiB  
Article
Distributed Ellipsoidal Intersection Fusion Estimation for Multi-Sensor Complex Systems
by Peng Zhang, Shuyu Zhou, Peng Liu and Mengwei Li
Sensors 2022, 22(11), 4306; https://doi.org/10.3390/s22114306 - 6 Jun 2022
Cited by 6 | Viewed by 2031
Abstract
This paper investigates the problem of distributed ellipsoidal intersection (DEI) fusion estimation for linear time-varying multi-sensor complex systems with unknown input disturbances and measurement data transmission delays. The external unknown input disturbance signals are modeled using a non-informative prior distribution. A set of independent random variables obeying the Bernoulli distribution is also used to describe measurement data transmission delays caused by network channel congestion, and appropriate buffer areas are added at the link nodes to retrieve the delayed transmission data values. For multi-sensor systems with complex situations, a minimum mean square error (MMSE) local estimator is designed in a Bayesian framework based on the maximum a posteriori (MAP) estimation criterion. In order to deal with the unknown correlations among the local estimators and to select a fusion estimator with lower computational complexity, the fusion estimator is designed using the ellipsoidal intersection (EI) fusion technique, and the consistency of the estimator is demonstrated. The differences among DEI fusion, distributed covariance intersection (DCI) fusion, and centralized fusion estimation are analyzed by a numerical example, and the superiority of the DEI fusion method is demonstrated.
(This article belongs to the Section Sensor Networks)
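For readers unfamiliar with the baseline the paper compares against, the sketch below implements plain covariance intersection (CI), which fuses two estimates with unknown cross-correlation through a convex combination of their information matrices; the ellipsoidal intersection rule studied in the paper is more involved and is not reproduced here.

```python
# Sketch of covariance intersection (CI), the DCI baseline mentioned in the abstract.
# Two local estimates (x1, P1) and (x2, P2) are fused without knowing their
# cross-covariance, by sweeping the convex weight omega and keeping the best trace.
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse (x1, P1) and (x2, P2); omega chosen to minimise the fused trace."""
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
        if best is None or np.trace(P) < best[2]:
            best = (x, P, np.trace(P))
    return best[0], best[1]

x1, P1 = np.array([1.0, 0.0]), np.diag([2.0, 0.5])
x2, P2 = np.array([1.2, -0.1]), np.diag([0.5, 2.0])
x_f, P_f = covariance_intersection(x1, P1, x2, P2)
print("fused state:", x_f, "\nfused covariance:\n", P_f)
```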
Figures:
Figure 1: Distributed fusion estimation of complex systems with a multi-sensor.
Figure 2: Results of the minimizing Γ, CI fusion, and EI fusion methods for two state ellipsoid estimations.
Figure 3: Performance of the Distributed Ellipsoidal Intersection (DEI) fusion estimator in the state estimation.
Figure 4: Ellipsoid1 and Ellipsoid2 forms of two locally estimated (x1, x2) under the sublevel set E_(x̂, P).
Figure 5: Ellipsoidal volume characterized by the fusion of the DEI and DCI results.
15 pages, 2157 KiB  
Article
Multi-Tone Harmonic Balance Optimization for High-Power Amplifiers through Coarse and Fine Models Based on X-Parameters
by Lida Kouhalvandi, Osman Ceylan, Serdar Ozoguz and Ladislau Matekovits
Sensors 2022, 22(11), 4305; https://doi.org/10.3390/s22114305 - 6 Jun 2022
Cited by 2 | Viewed by 2354
Abstract
In this study, we focus on automated optimization design methodologies to concurrently trade off between power gain, output power, efficiency, and linearity specifications in radio frequency (RF) high-power amplifiers (HPAs) through deep neural networks (DNNs). RF HPAs are highly nonlinear circuits for which characterizing accurate and desired amplitude and phase responses to improve the overall performance is not a straightforward process. For this case, we propose a coarse and fine modeling approach based on first modeling the involved transistor and then selecting the best configuration of the HPA along with optimizing the involved input and output termination networks through DNNs. In the fine phase, we first construct the equivalent model of the GaN HEMT transistor by using X-parameters. Then, in the coarse phase, we utilize hidden layers of the modeled transistor and replace the HPA’s DNN to model the behavior of the selected HPA by using S-parameters. If suitable accuracy of the HPA model is not achieved, the hyperparameters of the fine model are improved and re-evaluated in the HPA model. We call the optimization process coarse and fine modeling since the evaluation process is performed from S-parameters to X-parameters. This stage of optimization can ensure modeling of the nonlinear HPA design that includes a high number of parameters in an effective way. Furthermore, to accelerate the optimization process, we use a classification DNN to select the best HPA topology for modeling the most suitable configuration at the coarse phase. The proposed modeling strategy results in highly accurate HPA designs that generate post-layouts automatically, where multi-tone harmonic balance specifications are optimized together without any human intervention. To validate the modeling approach and optimization process, a 10 W HPA is simulated and measured in the operational frequency band of 1.8 GHz to 2.2 GHz, i.e., the L-band. The measurement results demonstrate a drain efficiency higher than 54% and linear gain of more than 12.5 dB, with better than 50 dBc adjacent channel power ratio (ACPR) after DPD.
(This article belongs to the Special Issue Micro and Nanodevices for Sensing Technology)
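As background on the linearity metrics being optimized (IMD and ACPR), the snippet below simply computes where the third-order intermodulation products of a two-tone test fall; the 2 GHz center frequency and the 1/5/10 MHz spacings mirror the measurement conditions listed for the figures, but the snippet is illustrative and not part of the paper's DNN flow.

```python
# Background illustration: in a two-tone linearity test, third-order intermodulation
# products fall at 2*f1 - f2 and 2*f2 - f1, i.e. only one tone spacing away from the
# carriers, which is why IMD3/ACPR limit HPA linearity.
def imd3_products(f1_hz, f2_hz):
    return 2 * f1_hz - f2_hz, 2 * f2_hz - f1_hz

f_center = 2.0e9
for spacing in (1e6, 5e6, 10e6):           # tone spacings used in the measurements
    f1, f2 = f_center - spacing / 2, f_center + spacing / 2
    low, high = imd3_products(f1, f2)
    print(f"spacing {spacing/1e6:.0f} MHz -> IMD3 at {low/1e9:.4f} and {high/1e9:.4f} GHz")
```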
Figures:
Figure 1: An overview of the proposed coarse and fine optimization method for modeling the transistor and optimizing HPA designs with DNNs, where a multi-objective algorithm is employed.
Figure 2: Third-order intermodulation distortion and products.
Figure 3: Sequence of achieving an accurate number of hidden layers in modeling the GaN HEMT transistor.
Figure 4: Accuracy prediction of 11 models using the classification DNN.
Figure 5: Fabricated EM-based HPA, designed by the proposed DNN-based optimization method. Units of each capacitor and resistor are pF and Ohm, respectively; “/” represents width/length in mm and “{ , }” represents Width_1, Width_2 of tapered lines in mm.
Figure 6: Simulated and measured S-parameters of the optimized HPA.
Figure 7: One-tone CW simulated and measured output power, drain efficiency, and power gain at 3 dB gain compression.
Figure 8: Power gain and drain efficiency over output power simulated (dash lines) and measured (solid lines) at various frequencies.
Figure 9: Measured high and low IMDs with 1 MHz, 5 MHz, and 10 MHz tone spacing values.
Figure 10: Measured output spectrum with 20 MHz LTE signal having 10.7 dB PAPR, with and without DPD.
23 pages, 7725 KiB  
Article
A Repair Method for Missing Traffic Data Based on FCM, Optimized by the Twice Grid Optimization and Sparrow Search Algorithms
by Pengcheng Li, Baotian Dong, Sixian Li and Rusi Chu
Sensors 2022, 22(11), 4304; https://doi.org/10.3390/s22114304 - 6 Jun 2022
Cited by 3 | Viewed by 1840
Abstract
Complete traffic sensor data are a significant prerequisite for analyzing the changing rules of traffic flow and formulating traffic control strategies. Nevertheless, missing traffic data are common in practice. In this study, an improved Fuzzy C-Means (FCM) algorithm is proposed to repair missing traffic data, and three different repair modes are established according to the correlation of time, space, and attribute value of traffic flow. First, a Twice Grid Optimization (TGO) algorithm is proposed to provide a reliable initial clustering center for the FCM algorithm. Then, the Sparrow Search Algorithm (SSA) is used to optimize the fuzzy weighting index m and the classification number k of the FCM algorithm. Finally, an experimental test on the traffic sensor data in Shunyi District, Beijing, is employed to verify the effectiveness of TGO-SSA-FCM. Experimental results showed that the improved algorithm had better performance than some traditional algorithms, and that different data repair modes should be selected under different miss-rate conditions.
(This article belongs to the Topic Intelligent Transportation Systems)
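For orientation, the sketch below is a minimal plain fuzzy C-means loop of the kind the paper builds on; the TGO initialisation and SSA tuning of m and k described in the abstract are not included, and the imputation step shown at the end is only one simple way the memberships could be used.

```python
# Minimal plain fuzzy C-means (FCM) sketch for context: the paper's TGO-SSA-FCM adds a
# grid-based initialisation (TGO) and sparrow-search tuning of m and k on top of this.
import numpy as np

def fcm(data, k=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), k))
    u /= u.sum(axis=1, keepdims=True)               # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))               # standard membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Stand-in data; missing values could then be imputed from the membership-weighted
# cluster centers (one simple option, not necessarily the paper's exact rule).
data = np.random.default_rng(1).normal(size=(200, 4))
centers, u = fcm(data, k=3)
imputed = u @ centers                                # membership-weighted estimate per sample
print(centers.shape, imputed.shape)
```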
Figures:
Figure 1: Basic information of 3 groups of sensors: (a) Latitude and longitude range of each group of sensors; (b) Specific position of each group of sensors.
Figure 2: Input data for three modes. (a) Input data of TR mode; (b) Input data of SR mode; (c) Input data of AR mode.
Figure 3: Pearson correlation coefficient of the input matrix. (a) Correlation matrix of TR mode; (b) Correlation matrix of SR mode.
Figure 4: Visualization results of AR data with t-SNE.
Figure 5: Traditional FCM parameter selection result.
Figure 6: Example of TGO algorithm. (a) Data gridding; (b) Grid frequency distribution; (c) Cumulative contribution rate of each grid density; (d) Results of the first grid optimization; (e) Results of the second grid optimization.
Figure 7: The selection result of thresholds.
Figure 8: The XB index of each cluster center.
Figure 9: Absolute error of different schemes at different loss ratios.
Figure 10: The RMSE of different schemes on different groups of sensor data. (a) RMSE of the first group of sensors; (b) RMSE of the second group of sensors; (c) RMSE of the third group of sensors.
Figure 11: The RA of different schemes on different groups of sensor data. (a) RA of the No.1 sensors; (b) RA of the No.2 sensors; (c) RA of the No.3 sensors.
Figure 12: Comparison results of RMSE and RA.
19 pages, 3829 KiB  
Article
Identifying A(s) and β(s) in Single-Loop Feedback Circuits Using the Intermediate Transfer Function Approach
by Gordon Walter Roberts
Sensors 2022, 22(11), 4303; https://doi.org/10.3390/s22114303 - 6 Jun 2022
Viewed by 2392
Abstract
It is common practice to model the input–output behavior of a single-loop feedback circuit using the two parameters, A and β. Such an approach was first proposed by Black to explain the advantages and disadvantages of negative feedback. Extensive theories of system behavior (e.g., stability, impedance control) have since been developed by mathematicians and control engineers centered around these two parameters. Circuit engineers rely on these insights to optimize the dynamic behavior of their circuits. Unfortunately, no method exists for uniquely identifying A or β in terms of the components of the circuit. Rather, indirect methods, such as the injection method of Middlebrook or the break-the-loop approach proposed by Rosenstark, compute the return ratio RR of the feedback loop and infer the parameters A and β. While one often assumes that the zeros of (1 + RR) are equal to the zeros of (1 + A × β), i.e., that the closed-loop poles are equivalent, this is not true in general. The objective of this paper is to present an exact method to uniquely identify each feedback parameter, A or β, in terms of the circuit components. Further, this paper identifies the circuit conditions for which the product A × β leads to the correct closed-loop poles.
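For reference, the standard Black relations the abstract refers to are summarised below; the paper's contribution is precisely that the return ratio RR(s) obtained by injection methods equals A(s)β(s) only under specific circuit conditions.

```latex
% Black's single-loop feedback relations referred to in the abstract:
% the closed-loop gain in terms of the forward gain A(s) and feedback factor beta(s).
\[
  A_f(s) \;=\; \frac{A(s)}{1 + A(s)\,\beta(s)},
  \qquad
  \text{closed-loop poles} \;=\; \text{zeros of } 1 + A(s)\,\beta(s).
\]
% Injection/return-ratio methods (Middlebrook, Rosenstark) instead locate the zeros of
% 1 + RR(s); equating these with the zeros of 1 + A(s)\beta(s) is only valid under the
% circuit conditions the paper identifies.
```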
Figures:
Figure 1: The general form of a negative feedback structure, as first proposed by H. Black.
Figure 2: The four noncompliant single-loop feedback topologies incorporated with circuits: (a) voltage-mixing/voltage-sensing, (b) voltage-mixing/current-sensing, (c) current-mixing/current-sensing, and (d) current-mixing/voltage-sensing.
Figure 3: Highlighting the problem with voltage and current mixing when a source resistance is present. (a) voltage mixing, and (b) current mixing.
Figure 4: Three equivalent voltage-mixing arrangements; two are compliant with Black’s single-loop feedback structure: (a) noncompliant, (b) compliant, and (c) compliant.
Figure 5: Three equivalent current-mixing arrangements; two are compliant with Black’s single-loop feedback structure: (a) noncompliant, (b) compliant option 1, and (c) compliant option 2.
Figure 6: Highlighting the physical difference between an output signal from a circuit with a feedback loop and the signal being sensed by the feedback network. (a) The output voltage is the same signal that is being sensed by the feedback network, and (b) the output voltage is different from the current signal that is being sensed by the feedback network.
Figure 7: Including a γ(s)-block that relates the sense variable x_Sen to the designated output signal x_o.
Figure 8: Illustrating the intermediate transfer functions associated with a single-loop feedback circuit.
Figure 9: Voltage-mixing/voltage-sensing topology: (a) Unity-gain amplifier using an op-amp, and (b) op-amp circuit model with general gain function μ(s). The voltage-mixing signals are highlighted in blue.
Figure 10: Inclusion of a feed-in branch α(s) to expand the circuit range of applicability of a single-loop negative feedback system description.
Figure 11: A unity-gain amplifier circuit that is to be mapped to the modified single-loop feedback structure of Figure 10. The circuit is the same, but the voltage-mixing variables have been changed.
Figure 12: Comparing noncompliant and compliant feedback topologies with the Middlebrook loop injection method for extracting its return ratio: (a) voltage-input voltage-output active filter circuit, (b) evaluating the return ratio of the active filter circuit using Middlebrook’s injection method by replacing the dependent voltage source related to the VCVS with an independent voltage source, (c) Norton equivalent circuit representation that is noncompliant with Black’s feedback topology, and (d) Norton equivalent circuit representation that is compliant.
Figure 13: Preparing a current-mixing/current-sensing feedback circuit for feedback parameter isolation: (a) Identifying the feedback loop. (b) As the output voltage is outside the feedback loop of the amplifier, a sensing current has been identified that is inside the loop. (c) Identifying the system variables that form a voltage-mixing loop that is compliant with Black’s topology. (d) A fully compliant circuit arrangement that meets the definition of a current-mixing/current-sensing topology.
Figure 14: Different feedback formulations for a CE BJT amplifier with resistive feedback: (a) voltage-mixing/voltage-sensing compliant arrangement with the feedback variable highlighted in blue, (b) through a Norton transformation, the CE amplifier is rearranged into an equivalent current-mixing/voltage-sensing compliant topology, (c) interchanging the feedback and error signal designations, and (d) selecting the sense signal as the collector current rather than the collector voltage.
22 pages, 8915 KiB  
Article
A Novel Detection and Multi-Classification Approach for IoT-Malware Using Random Forest Voting of Fine-Tuning Convolutional Neural Networks
by Safa Ben Atitallah, Maha Driss and Iman Almomani
Sensors 2022, 22(11), 4302; https://doi.org/10.3390/s22114302 - 6 Jun 2022
Cited by 37 | Viewed by 4248
Abstract
The Internet of Things (IoT) is prone to malware assaults due to its simple installation and autonomous operating qualities. IoT devices have become the most tempting targets of malware due to well-known vulnerabilities such as weak, guessable, or hard-coded passwords, a lack of secure update procedures, and unsecured network connections. Traditional static IoT malware detection and analysis methods have been shown to be unsatisfactory solutions to understanding IoT malware behavior for mitigation and prevention. Deep learning models have made huge strides in the realm of cybersecurity in recent years, thanks to their tremendous data mining, learning, and expression capabilities, thus easing the burden on malware analysts. In this context, a novel detection and multi-classification vision-based approach for IoT-malware is proposed. This approach makes use of the benefits of deep transfer learning methodology and incorporates the fine-tuning method and various ensembling strategies to increase detection and classification performance without having to develop the training models from scratch. It adopts the fusion of 3 CNNs, ResNet18, MobileNetV2, and DenseNet161, by using the random forest voting strategy. Experiments are carried out using a publicly available dataset, MaleVis, to assess and validate the suggested approach. MaleVis contains 14,226 RGB converted images representing 25 malware classes and one benign class. The obtained findings show that our suggested approach outperforms the existing state-of-the-art solutions in terms of detection and classification performance; it achieves a precision of 98.74%, recall of 98.67%, a specificity of 98.79%, F1-score of 98.70%, MCC of 98.65%, an accuracy of 98.68%, and an average processing time per malware classification of 672 ms.
(This article belongs to the Special Issue Advances in Artificial Intelligence for Cyber Security)
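A minimal sketch of the "random forest voting" fusion idea is given below, assuming each CNN contributes its class-probability vector and a random forest makes the final decision; the feature choice, synthetic data, and hyperparameters are assumptions for illustration, not the authors' exact pipeline.

```python
# Hedged sketch of fusing several CNNs with a random-forest voting stage: stack each
# network's class-probability vector for an image and let a random forest decide.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_train, n_classes = 1000, 26            # MaleVis: 25 malware classes + 1 benign
rng = np.random.default_rng(0)

# Stand-ins for softmax outputs of ResNet18, MobileNetV2 and DenseNet161 (synthetic).
p_resnet = rng.dirichlet(np.ones(n_classes), n_train)
p_mobilenet = rng.dirichlet(np.ones(n_classes), n_train)
p_densenet = rng.dirichlet(np.ones(n_classes), n_train)
y = rng.integers(0, n_classes, n_train)  # ground-truth labels (synthetic here)

X = np.hstack([p_resnet, p_mobilenet, p_densenet])   # fused 78-dimensional feature
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("fused prediction for first sample:", forest.predict(X[:1]))
```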
Figures:
Figure 1: Proposed approach for malware detection and multi-classification.
Figure 2: The distribution of samples for each class in the Malevis dataset.
Figure 3: Malware RGB images from different malware families and the benign class provided by the Malevis dataset.
Figure 4: Precision of each malware class resulting from DenseNet161, MobileNetV2, and ResNet18.
Figure 5: F1-score of each malware class resulting from DenseNet161, MobileNetV2, and ResNet18.
Figure 6: The normalized confusion matrix of the proposed approach.
Figure 7: Precision of each malware class using the proposed approach.
Figure 8: F1-score of each malware class using the proposed approach.
Figure 9: Plots of TP rate versus FP rate of the proposed approach.
Figure 10: Zoomed-in version of Figure 9.
15 pages, 772 KiB  
Article
An Analysis of Semicircular Channel Backscattering Interferometry through Ray Tracing Simulations
by Niall M. C. Mulkerns, William H. Hoffmann, Ian D. Lindsay and Henkjan Gersen
Sensors 2022, 22(11), 4301; https://doi.org/10.3390/s22114301 - 6 Jun 2022
Viewed by 2442
Abstract
Recent backscattering interferometry studies utilise a single-channel microfluidic system, typically approximately semicircular in cross-section. Here, we present a complete ray tracing model for on-chip backscattering interferometry with a semicircular cross-section, including the dependence upon polarisation and angle of incidence. The full model is validated and utilised to calculate the expected fringe patterns and sensitivities observed under both normal and oblique angles of incidence. Comparison with experimental data from approximately semicircular channels using the parameters stated shows that they cannot be explained using a semicircular geometry. The disagreement does not impact the validity of the experimental data, but highlights that the optical mechanisms behind the various modalities of backscattering interferometry would benefit from clarification. From the analysis presented here, we conclude that, for reasons of ease of analysis, data quality, and sensitivity for a given radius, capillary-based backscattering interferometry affords numerous benefits over on-chip backscattering interferometry.
(This article belongs to the Section Optical Sensors)
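The reference optical path length that appears in the caption of Figure 4 (2·n2·r + 2·n1·t + n0·d, for a centred, normally incident ray) can be evaluated with the small helper below; the refractive indices and dimensions are placeholder values, not the paper's stated parameters.

```python
# Reference optical path length for a normally incident, centred ray, as used in the
# Figure 4 caption: OPL = 2*n2*r + 2*n1*t + n0*d (channel fluid, chip substrate, air gap).
# The numeric values below are placeholders, not the paper's "standard parameters".
def reference_opl(n0, n1, n2, r, t, d):
    return 2 * n2 * r + 2 * n1 * t + n0 * d

opl = reference_opl(n0=1.000, n1=1.52, n2=1.333,   # air, glass-like chip, water
                    r=50e-6, t=1.0e-3, d=10e-3)    # channel radius, chip thickness, air gap
print(f"reference optical path length ≈ {opl*1e3:.3f} mm")
```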
Figures:
Graphical abstract.
Figure 1: (A) A diagram showing the ray path taken when light is incident perpendicular to a chip with a semicircular channel with intersection number i = 1. (B) The path of a ray when the incident light is oblique to the chip surface with intersection number i = 3. Positive x is defined to be to the right.
Figure 2: A diagram showing examples of a type a, b, and c ray, highlighting the segments of the c ray that correspond to the path length sections L_0–L_5 as described in the text.
Figure 3: Graphs showing the relative amplitudes (where I_0 = 1) of type c rays for both s- and p-polarised incident light at normal incidence (ψ = 0°, (A)) and oblique incidence (ψ = 3°, (B)). The data here are taken using the standard parameters as defined in the main text. Values of x/r between ±1.5 are simulated to sample the full range of values that give rise to type c rays. The dashed lines in (A) represent the bounds on a given intersection number, with the central section denoting i = 1 and increasing by 1 upon crossing a line moving outwards. The sudden reduction in amplitude at the edges is due to the rays no longer entering the channel at this angle. The amplitudes of type a and b rays for p-polarised light are omitted due to their similarity with their s-polarised counterparts.
Figure 4: A graph showing the optical path length difference of a type c ray for both normal and oblique incidence (0° and 3°, respectively). The difference is defined with respect to a ray of ψ = 0 and x = 0 (i.e., solving Equation (18) and subtracting 2·n_2·r + 2·n_1·t + n_0·d). Data were simulated using the parameters as set out in the main text.
Figure 5: Graphs showing the interference patterns seen on a detector using the parameters as set out in the main body of text at a distance of 1 m, with the angle given from the line of x = 0. (A) shows the interference pattern for s-polarised incident light, whereas (B) shows the pattern imaged for p-polarised light.
Figure 6: A graph showing the dechirped Fourier transform of the interference pattern seen at normal incidence. A single sharp peak in the Fourier domain is seen here for both s- (A) and p-polarised (B) incident light. A graph showing how the phase of each peak in (A,B) changes as a function of refractive index n_2 is shown in (C). All data were taken using the parameters set out in the main text.
Figure 7: A graph showing the dechirped Fourier transform of the interference pattern seen at an incident angle of ψ = 3°. A sharp peak in the Fourier domain, as seen in Figure 6, is also seen here for both s- (A) and p-polarised (B) incident light. A graph showing how the phase of each peak in (A,B) changes as a function of refractive index n_2 is shown in (C). All data were taken using the other parameters set out in the main text.
18 pages, 4076 KiB  
Article
Two-Stage Hybrid Model for Efficiency Prediction of Centrifugal Pump
by Yi Liu, Zhaoshun Xia, Hongying Deng and Shuihua Zheng
Sensors 2022, 22(11), 4300; https://doi.org/10.3390/s22114300 - 6 Jun 2022
Cited by 5 | Viewed by 2537
Abstract
Accurately predicting the efficiency of centrifugal pumps at different rotational speeds is important but still intractable in practice. To enhance the prediction performance, this work proposes a hybrid modeling method that combines both the process data and knowledge of centrifugal pumps. First, according to the process knowledge of centrifugal pumps, the efficiency curve is divided into two stages. Then, the affinity law of pumps and a Gaussian process regression (GPR) model are explored and utilized to predict the efficiency at their suitable flow stages, respectively. Furthermore, a probability index is established through the prediction variance of a GPR model and Bayesian inference to select a suitable training set to improve the prediction accuracy. Experimental results show the superiority of the hybrid modeling method compared with using only mechanism or data-driven models.
(This article belongs to the Section Industrial Sensors)
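The affinity laws that form the mechanism half of the hybrid model are summarised in the sketch below: for a speed change n1 → n2, flow, head, and power scale with the first, second, and third power of the speed ratio, so efficiency at corresponding operating points is approximately preserved. The operating-point numbers are illustrative.

```python
# Pump affinity laws used as the mechanism part of the hybrid model: for a speed change
# n1 -> n2, flow scales as (n2/n1), head as (n2/n1)^2 and power as (n2/n1)^3, so the
# efficiency at corresponding operating points is, to first order, unchanged.
def scale_operating_point(q1, h1, p1, n1_rpm, n2_rpm):
    s = n2_rpm / n1_rpm
    return q1 * s, h1 * s**2, p1 * s**3

# Illustrative operating point (not from the paper): 50 m3/h, 20 m head, 3.5 kW at 2900 rpm.
q2, h2, p2 = scale_operating_point(q1=50.0, h1=20.0, p1=3.5, n1_rpm=2900, n2_rpm=1450)
print(f"scaled point: Q = {q2:.1f} m3/h, H = {h2:.2f} m, P = {p2:.2f} kW")
```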
Figures:
Figure 1: The diagram of the experimental system for measuring the centrifugal pump efficiency.
Figure 2: The experimental system for measuring the centrifugal pump efficiency.
Figure 3: The centrifugal pump efficiency changes with the flow rate at different speeds.
Figure 4: The curve of K values with valve opening at different speeds.
Figure 5: Hybrid model for prediction of the centrifugal pump efficiency.
Figure 6: Modeling flowchart for the efficiency prediction of the centrifugal pump at different rotational speeds.
Figure 7: GPR models trained by different sample subsets predict the test sets (a) S_2, (b) S_4, (c) S_7, and (d) S_8 to obtain the corresponding MEPP and RMSE values.
Figure 8: The LGPR model and the GPR model prediction results and relative error of the small-flow stage of the test sets (a) S_2, (b) S_4, (c) S_7, and (d) S_8.
Figure 9: The hybrid model, the GPR model, and the mechanism model prediction results and relative error of the test sets (a) S_2, (b) S_4, (c) S_7, and (d) S_8.
12 pages, 9397 KiB  
Article
Usefulness of an Additional Filter Created Using 3D Printing for Whole-Body X-ray Imaging with a Long-Length Detector
by Hyunsoo Seo, Wooyoung Kim, Bongju Han, Huimin Jang, Myeong Seong Yoon and Youngjin Lee
Sensors 2022, 22(11), 4299; https://doi.org/10.3390/s22114299 - 6 Jun 2022
Cited by 1 | Viewed by 2877
Abstract
We recently developed a long-length detector that combines three detectors and successfully acquires whole-body X-ray images. Although the developed detector system can efficiently acquire whole-body images in a short time, it may show problems with diagnostic performance in some areas owing to the [...] Read more.
We recently developed a long-length detector that combines three detectors and successfully acquires whole-body X-ray images. Although the developed detector system can efficiently acquire whole-body images in a short time, it may show problems with diagnostic performance in some areas owing to the use of high-energy X-rays during whole-spine and long-length examinations. In particular, during examinations of relatively thin bones, such as ankles, with a long-length detector, the image quality deteriorates because of an increase in X-ray transmission. An additional filter is the usual way to address this limitation, but conventional filters impose a higher load on the X-ray tube to compensate for the reduced radiation dose and suffer from high manufacturing costs. Thus, in this study, a newly designed additional filter was fabricated using 3D printing technology to improve the applicability of the long-length detector. Whole-spine anterior–posterior (AP), lateral, and long-leg AP X-ray examinations were performed using 3D-printed additional filters composed of 14 mm thick aluminum (Al) or 14 mm thick Al + 1 mm thick copper (Cu) composite material. The signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and radiation dose for the acquired X-ray images were evaluated to demonstrate the usefulness of the filters. Under all X-ray inspection conditions, the best results were obtained when the composite additional filter based on a 14 mm thick Al + 1 mm thick Cu material was used. We confirmed that an SNR improvement of up to 46%, a CNR improvement of 37%, and a radiation dose reduction of 90% could be achieved in the X-ray images obtained using the composite additional filter in comparison to the images obtained with no filter. The results proved that the additional filter made with a 3D printer was effective in improving image quality and reducing the radiation dose for X-ray images obtained using a long-length detector. Full article
(This article belongs to the Special Issue Advanced Materials and Technologies for Radiation Detectors)
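For reference, the SNR and CNR reported here are conventionally computed from the mean and standard deviation of a target ROI and a background ROI. A minimal Python sketch of those standard definitions (illustrative only; the paper may normalize slightly differently):

import numpy as np

def snr_cnr(roi_target: np.ndarray, roi_background: np.ndarray):
    """Conventional definitions: SNR = mean_target / std_background,
    CNR = |mean_target - mean_background| / std_background."""
    mean_t = roi_target.mean()
    mean_bg = roi_background.mean()
    std_bg = roi_background.std()
    return mean_t / std_bg, abs(mean_t - mean_bg) / std_bg

# Example with synthetic ROIs standing in for bone and soft-tissue regions
rng = np.random.default_rng(0)
target = rng.normal(120.0, 5.0, size=(32, 32))
background = rng.normal(60.0, 8.0, size=(32, 32))
print(snr_cnr(target, background))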
Show Figures
Figure 1: Whole-body phantom and long-length detector used in the study.
Figure 2: Materials and equipment used in the experiment: the Ultimaker 3D printer, CAD modeling of the long-leg filter, the completed 14 mm thick Al and 14 mm thick Al + 1 mm thick Cu composite whole-spine and long-leg filters, and the filter installed for the experiments.
Figure 3: ROI setting in the whole-spine AP image (target and background ROIs at C-3, T-6, and L-3).
Figure 4: ROI setting in the whole-spine LAT image (target and background ROIs at C-3, T-6, and L-3).
Figure 5: ROI setting in the long-leg AP image (target and background ROIs at the pelvis, knee, and ankle).
Figure 6: Whole-spine AP X-ray images acquired without a filter, with the 14 mm thick Al filter, and with the 14 mm thick Al + 1 mm thick Cu composite filter.
Figure 7: Whole-spine LAT X-ray images acquired without a filter, with the 14 mm thick Al filter, and with the 14 mm thick Al + 1 mm thick Cu composite filter.
Figure 8: Long-leg AP X-ray images acquired without a filter, with the 14 mm thick Al filter, and with the 14 mm thick Al + 1 mm thick Cu composite filter.
Figure 9: SNR and CNR results for the whole-spine AP, whole-spine LAT, and long-leg AP examinations with and without additional filters.
Figure 10: Radiation dose (mSv) evaluated in examinations with and without additional filters.
Full article
13 pages, 788 KiB  
Article
Towards Convolutional Neural Network Acceleration and Compression Based on Simonk-Means
by Mingjie Wei, Yunping Zhao, Xiaowen Chen, Chen Li and Jianzhuang Lu
Sensors 2022, 22(11), 4298; https://doi.org/10.3390/s22114298 - 6 Jun 2022
Cited by 1 | Viewed by 1950
Abstract
Convolutional Neural Networks (CNNs) are popular models that are widely used in image classification, target recognition, and other fields. Model compression is a common step in transplanting neural networks into embedded devices, and it is often used in the retraining stage. However, it [...] Read more.
Convolutional Neural Networks (CNNs) are popular models that are widely used in image classification, target recognition, and other fields. Model compression is a common step in transplanting neural networks into embedded devices, and it is often used in the retraining stage. However, it requires considerable time to retrain the weights in order to compensate for the loss of precision. Unlike in prior designs, we propose a novel model compression approach based on Simonk-means, which is specifically designed to support a hardware acceleration scheme. First, we propose an extension algorithm named Simonk-means based on simple k-means. We use Simonk-means to cluster trained weights in convolutional layers and fully connected layers. Second, we reduce the consumption of hardware resources in data movement and storage by using a data storage and index approach. Finally, we provide the hardware implementation of the compressed CNN accelerator. Our evaluations on several classification tasks show that our design can achieve 5.27× compression and reduce 74.3% of the multiply–accumulate (MAC) operations in AlexNet on the FASHION-MNIST dataset. Full article
(This article belongs to the Section Intelligent Sensors)
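As a rough illustration of the general idea behind clustering-based weight compression (plain k-means here, not the Simonk-means extension defined in the paper), trained weights can be replaced by a small codebook plus per-weight indices:

import numpy as np
from sklearn.cluster import KMeans

def quantize_weights(weights: np.ndarray, k: int = 16):
    """Cluster a weight tensor into k centroids; return (codebook, indices)."""
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(flat)
    codebook = km.cluster_centers_.ravel()      # k float values stored once
    indices = km.labels_.astype(np.uint8)       # one small index per weight
    return codebook, indices

def dequantize(codebook, indices, shape):
    return codebook[indices].reshape(shape)

w = np.random.randn(256, 128).astype(np.float32)   # stand-in for a trained FC layer
codebook, idx = quantize_weights(w, k=16)
w_hat = dequantize(codebook, idx, w.shape)
print("reconstruction RMSE:", np.sqrt(np.mean((w - w_hat) ** 2)))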
Show Figures
Figure 1: Profiling for convolutional neural networks.
Figure 2: Overview of the design.
Figure 3: Example of encoding.
Figure 4: Example of the hardware acceleration strategy.
Figure 5: Histograms demonstrating the symmetric weight distribution in fully connected layers on FASHION-MNIST.
Figure 6: Hardware architecture; the red and blue arrows correspond to the data flow of the input images and weights.
Figure 7: Accuracy of LeNet-5 for each fully connected layer with different values of k.
Figure 8: Accuracy of ResNet50 for each fully connected layer with different values of k.
Figure 9: Compression ratio of the fully connected layers of the neural network with different values of k.
Full article
21 pages, 7687 KiB  
Article
BrainGAN: Brain MRI Image Generation and Classification Framework Using GAN Architectures and CNN Models
by Halima Hamid N. Alrashedy, Atheer Fahad Almansour, Dina M. Ibrahim and Mohammad Ali A. Hammoudeh
Sensors 2022, 22(11), 4297; https://doi.org/10.3390/s22114297 - 6 Jun 2022
Cited by 43 | Viewed by 7193
Abstract
Deep learning models have been used in several domains; however, they still require adjustment before being applied in sensitive areas such as medical imaging. Since technology is needed in the medical domain because of time constraints, a high level of [...] Read more.
Deep learning models have been used in several domains; however, they still require adjustment before being applied in sensitive areas such as medical imaging. Since technology is needed in the medical domain because of time constraints, a high level of accuracy is essential for trustworthiness. Because of privacy concerns, machine learning applications in the medical field are often unable to use real medical data. For example, the lack of brain MRI images makes it difficult to classify brain tumors using image-based classification. The solution to this challenge was achieved through the application of Generative Adversarial Network (GAN)-based augmentation techniques. Deep Convolutional GAN (DCGAN) and Vanilla GAN are two examples of GAN architectures used for image generation. In this paper, a framework, denoted as BrainGAN, for generating and classifying brain MRI images using GAN architectures and deep learning models was proposed. In addition, this study proposed an automatic way to check that generated images are satisfactory. It uses three models: CNN, MobileNetV2, and ResNet152V2. The deep transfer models were trained with images generated by Vanilla GAN and DCGAN and then evaluated on a test set composed of real brain MRI images. From the results of the experiment, it was found that the ResNet152V2 model outperformed the other two models. The ResNet152V2 achieved 99.09% accuracy, 99.12% precision, 99.08% recall, 99.51% area under the curve (AUC), and 0.196 loss based on the brain MRI images generated by the DCGAN architecture. Full article
(This article belongs to the Section Sensing and Imaging)
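The reported accuracy, precision, recall, and AUC can be computed from a classifier's predictions on the real-image test set with standard metrics; a generic sketch with hypothetical labels and scores (not the study's data):

import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

# y_true: ground-truth labels of a test set (0 = no tumor, 1 = tumor)
# y_score: the classifier's predicted probability for the "tumor" class
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_score = np.array([0.1, 0.9, 0.8, 0.3, 0.7, 0.2, 0.95, 0.6])
y_pred = (y_score >= 0.5).astype(int)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))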
Show Figures
Figure 1: The proposed BrainGAN framework: real brain MRI images, image generation with DCGAN and Vanilla GAN, the CNN, MobileNetV2, and ResNet152V2 deep learning models, and finally testing and validation.
Figure 2: Pseudo-code of the proposed BrainGAN framework.
Figure 3: MRI scan images for the two classes (no-tumor and tumor samples).
Figure 4: The proposed Vanilla GAN architecture for generating MRI images.
Figure 5: The proposed DCGAN architecture for generating MRI images.
Figure 6: MRI scan images generated by the Vanilla GAN (no-tumor and tumor images).
Figure 7: MRI scan images generated by the DCGAN (no-tumor and tumor images).
Figure 8: Confusion matrix for the proposed CNN model using Vanilla GAN and DCGAN generated images.
Figure 9: Loss, AUC, precision, recall, and accuracy over training and validation epochs for the CNN model using DCGAN generated images.
Figure 10: Confusion matrix for the MobileNetV2 model using Vanilla GAN and DCGAN generated images.
Figure 11: Loss, AUC, precision, recall, and accuracy over training and validation epochs for the MobileNetV2 model using DCGAN generated images.
Figure 12: Confusion matrix for the ResNet152V2 model using Vanilla GAN and DCGAN generated images.
Figure 13: Loss, AUC, precision, recall, and accuracy over training and validation epochs for the ResNet152V2 model using DCGAN generated images.
Figure 14: Loss measures for the CNN, MobileNetV2, and ResNet152V2 models using Vanilla GAN and DCGAN generated images.
Figure 15: Accuracy, precision, recall, and AUC measures for the proposed CNN, MobileNetV2, and ResNet152V2 models using Vanilla GAN and DCGAN generated images.
Figures 16-18: Accuracy, precision, and recall comparisons between the proposed models and previous studies [11,18,20].
Full article
13 pages, 1599 KiB  
Article
Micro-Expression Recognition Based on Optical Flow and PCANet+
by Shiqi Wang, Suen Guan, Hui Lin, Jianming Huang, Fei Long and Junfeng Yao
Sensors 2022, 22(11), 4296; https://doi.org/10.3390/s22114296 - 5 Jun 2022
Cited by 8 | Viewed by 3210
Abstract
Micro-expressions are rapid and subtle facial movements. Different from ordinary facial expressions in our daily life, micro-expressions are very difficult to detect and recognize. In recent years, due to a wide range of potential applications in many domains, micro-expression recognition has aroused extensive [...] Read more.
Micro-expressions are rapid and subtle facial movements. Different from ordinary facial expressions in our daily life, micro-expressions are very difficult to detect and recognize. In recent years, due to a wide range of potential applications in many domains, micro-expression recognition has aroused extensive attention from computer vision. Because available micro-expression datasets are very small, deep neural network models with a huge number of parameters are prone to over-fitting. In this article, we propose an OF-PCANet+ method for micro-expression recognition, in which we design a spatiotemporal feature learning strategy based on shallow PCANet+ model, and we incorporate optical flow sequence stacking with the PCANet+ network to learn discriminative spatiotemporal features. We conduct comprehensive experiments on publicly available SMIC and CASME2 datasets. The results show that our lightweight model obviously outperforms popular hand-crafted methods and also achieves comparable performances with deep learning based methods, such as 3D-FCNN and ELRCN. Full article
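The optical-flow stacking step can be sketched with OpenCV's dense Farneback flow as a stand-in for the subspace trajectory model used in the paper; flow fields from the reference frame to each later frame are concatenated into one multi-channel input:

import cv2
import numpy as np

def stack_flow(frames):
    """Compute dense optical flow from the first frame to every later frame
    and stack the (dx, dy) fields into one multi-channel array."""
    ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    channels = []
    for f in frames[1:]:
        cur = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(ref, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        channels.append(flow)                     # H x W x 2 per frame
    return np.concatenate(channels, axis=2)       # H x W x 2*(T-1)

# Random frames standing in for a short micro-expression clip
clip = [np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8) for _ in range(5)]
print(stack_flow(clip).shape)                     # (128, 128, 8)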
Show Figures
Figure 1: The framework of the proposed ME recognition method.
Figure 2: Example of optical flow motion estimation, with the first frame of the ME image sequence as the reference frame and the optical flow field computed between the reference frame and the remaining frames using a subspace trajectory model.
Figure 3: Illustration of stacking optical flow sequences into multi-channel images.
Figure 4: The frames of a sample video clip (happiness) in the CASME2 dataset.
Figure 5: Visualization of the feature maps produced in each layer for an input video clip from the CASME2 dataset.
Full article
16 pages, 2449 KiB  
Article
Multi-Modal Vehicle Trajectory Prediction by Collaborative Learning of Lane Orientation, Vehicle Interaction, and Intention
by Wei Tian, Songtao Wang, Zehan Wang, Mingzhi Wu, Sihong Zhou and Xin Bi
Sensors 2022, 22(11), 4295; https://doi.org/10.3390/s22114295 - 5 Jun 2022
Cited by 9 | Viewed by 3371
Abstract
Accurate trajectory prediction is an essential task in automated driving, which is achieved by sensing and analyzing the behavior of surrounding vehicles. Although plenty of research works have been invested in this field, it is still a challenging subject due to the environment’s [...] Read more.
Accurate trajectory prediction is an essential task in automated driving, which is achieved by sensing and analyzing the behavior of surrounding vehicles. Although plenty of research works have been invested in this field, it is still a challenging subject due to the environment’s complexity and the driving intention uncertainty. In this paper, we propose a joint learning architecture to incorporate the lane orientation, vehicle interaction, and driving intention in vehicle trajectory forecasting. This work employs a coordinate transform to encode the vehicle trajectory with lane orientation information, which is further incorporated into various interaction models to explore the mutual trajectory relations. Extracted features are applied in a dual-level stochastic choice learning to distinguish the trajectory modality at both the intention and motion levels. By collaborative learning of lane orientation, interaction, and intention, our approach can be applied to both highway and urban scenes. Experiments on the NGSIM, HighD, and Argoverse datasets demonstrate that the proposed method achieves a significant improvement in prediction accuracy compared with the baseline. Full article
(This article belongs to the Section Vehicular Sensing)
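The lane-orientation encoding amounts to expressing trajectory points in a frame aligned with the local lane centerline. A minimal 2D rotation-and-translation sketch (the origin and heading below are hypothetical, not taken from the datasets):

import numpy as np

def to_lane_frame(points: np.ndarray, origin: np.ndarray, heading: float) -> np.ndarray:
    """Rotate/translate world-frame (x, y) points into a frame whose x-axis
    follows the lane direction given by `heading` (radians) at `origin`."""
    c, s = np.cos(heading), np.sin(heading)
    rot = np.array([[c, s], [-s, c]])            # world -> lane rotation
    return (points - origin) @ rot.T

# Hypothetical three-point history and a lane heading of 30 degrees
history = np.array([[5.0, 2.0], [6.0, 2.5], [7.0, 3.1]])
print(to_lane_frame(history, origin=history[-1], heading=np.deg2rad(30.0)))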
Show Figures
Figure 1: The multi-modal vehicle trajectory prediction framework based on interaction modeling and lane orientation information; the related lane centerline is depicted in red, predicted trajectories in yellow, and historical trajectories in green.
Figure 2: Three interaction model embedding methods in the LSTM encoder-decoder network: embedding only at the current frame, embedding at each frame, and embedding with spatial-temporal coupling.
Figure 3: Coordinate transform of the historical vehicle trajectory.
Figure 4: Visualization of trajectories predicted directly in the world coordinate frame versus by the proposed framework.
Figure 5: Visualization of multi-modal prediction results in different urban road scenarios: intersection, T-junction, and merging.
Full article
15 pages, 5097 KiB  
Article
Research on Dynamic Measurement Method of Flow Rate in Tea Processing
by Zhangfeng Zhao, Gaohong Liu, Yueliang Wang, Jiyu Peng, Xin Qiao and Jiang Zhong
Sensors 2022, 22(11), 4294; https://doi.org/10.3390/s22114294 - 5 Jun 2022
Cited by 1 | Viewed by 1932
Abstract
Tea flow rate is a key indicator in tea production and processing. Due to the small real-time flow of tea leaves on the production line, the noise caused by the transmission system is greater than or close to the real signal of tea [...] Read more.
Tea flow rate is a key indicator in tea production and processing. Due to the small real-time flow of tea leaves on the production line, the noise caused by the transmission system is greater than or close to the real signal of tea leaves. This issue may affect the dynamic measurement accuracy of tea flow. Therefore, a variational mode decomposition combined with a wavelet threshold (VMD-WT) denoising method is proposed to improve the accuracy of tea flow measurement. The denoising method of the tea flow signal based on VMD-WT is established, and the results are compared with WT, VMD, empirical mode decomposition (EMD), and empirical mode decomposition combined with wavelet threshold (EMD-WT). In addition, the dynamic measurement of different tea flow rates in tea processing is carried out. The result shows that the main noise in tea flow measurement comes from mechanical vibration. The VMD-WT method can effectively remove the noise in the tea dynamic weighing signal, and its denoising performance is better than that of the WT, VMD, EMD, and EMD-WT methods. The average cumulative measurement accuracy of the tea flow signal based on the VMD-WT algorithm is 0.88%, which is 55% higher than that before denoising. This study provides an effective method for dynamic and accurate measurement of tea flow and offers technical support for digital control of tea processing. Full article
(This article belongs to the Section Intelligent Sensors)
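The wavelet-threshold stage of VMD-WT can be illustrated with PyWavelets: decompose, soft-threshold the detail coefficients, and reconstruct. The wavelet, level, and threshold rule below are generic assumptions, not the paper's exact settings:

import numpy as np
import pywt

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Soft-threshold wavelet denoising with a universal threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients (MAD estimator)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

t = np.linspace(0, 10, 2000)
clean = 0.5 + 0.1 * np.sin(2 * np.pi * 0.5 * t)      # slowly varying "flow" signal
noisy = clean + 0.05 * np.random.randn(t.size)        # vibration-like noise
print(np.std(noisy - clean), np.std(wavelet_denoise(noisy) - clean))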
Show Figures
Figure 1: Schematic diagram of the experimental device.
Figure 2: Flow chart of denoising of the tea dynamic weighing signal based on VMD-WT.
Figure 3: Spectrum analysis of the non-load and loaded signals of the electronic belt scale.
Figure 4: Instantaneous frequency mean value with different K values.
Figure 5: VMD decomposition results and spectrum analysis.
Figure 6: VMD and VMD-WT denoising results of the tea dynamic weighing signal with spectrum analysis.
Figure 7: Comparison of VMD and VMD-WT denoising results of the tea dynamic weighing signal.
Figure 8: Frequency spectrum of the tea dynamic weighing signal with different denoising methods.
Figure 9: SNR of the tea dynamic weighing signal with different denoising methods.
Figures A1-A3: Spectrum diagrams of the non-load and loaded signals and the decomposition result of the EMD method.
Full article
39 pages, 11290 KiB  
Article
A Path-Following Controller for Marine Vehicles Using a Two-Scale Inner-Outer Loop Approach
by Pramod Maurya, Helio Mitio Morishita, Antonio Pascoal and A. Pedro Aguiar
Sensors 2022, 22(11), 4293; https://doi.org/10.3390/s22114293 - 5 Jun 2022
Cited by 10 | Viewed by 3639
Abstract
This article addresses the problem of path following of marine vehicles along straight lines in the presence of currents by resorting to an inner-outer control loop strategy, with due account for the presence of currents. The inner-outer loop control structures exhibit a fast-slow [...] Read more.
This article addresses the problem of path following of marine vehicles along straight lines in the presence of currents by resorting to an inner-outer control loop strategy, with due account for the presence of currents. The inner-outer loop control structures exhibit a fast-slow temporal scale separation that yields simple “rules of thumb” for controller tuning. Stated intuitively, the inner-loop dynamics should be much faster than those of the outer loop. Conceptually, the procedure described has three key advantages: (i) it decouples the design of the inner and outer control loops, (ii) the structure of the outer-loop controller does not require exact knowledge of the vehicle dynamics, and (iii) it provides practitioners a very convenient method to effectively implement path-following controllers on a wide range of vehicles. The path-following controller discussed in this article is designed at the kinematic outer loop that commands the inner loop with the desired heading angles while the vehicle moves at an approximately constant speed. The key underlying idea is to provide a seamless implementation of path-following control algorithms on heterogeneous vehicles, which are often equipped with heading autopilots. To this end, we assume that the heading control system is characterized in terms of an IOS-like relationship without detailed knowledge of vehicle dynamics parameters. This paper quantitatively evaluates the combined inner-outer loop to obtain a relationship for assessing the combined system’s stability. The methods used are based on nonlinear control theory, wherein the cascade and feedback systems of interest are characterized in terms of their IOS properties. We use the IOS small-gain theorem to obtain quantitative relationships for controller tuning that are applicable to a broad range of marine vehicles. Tests with AUVs and one ASV in real-life conditions have shown the efficacy of the path-following control structure developed. Full article
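The kinematic outer loop in controllers of this kind typically issues a line-of-sight heading command computed from the cross-track error and a look-ahead distance; a minimal sketch of that generic guidance law (not necessarily the exact expression derived in the paper):

import numpy as np

def los_heading(path_angle: float, cross_track_error: float, lookahead: float) -> float:
    """Desired heading: path direction corrected by a look-ahead LOS term."""
    return path_angle - np.arctan2(cross_track_error, lookahead)

# Straight path with heading 0 rad, vehicle 4 m off the path, 10 m look-ahead
print(np.degrees(los_heading(0.0, 4.0, 10.0)))   # about -21.8 deg, steering back toward the path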
Show Figures
Figure 1: Heterogeneous vehicles used to demonstrate cooperative motion control during the GREX trials at Sesimbra, Portugal.
Figure 2: Notation and reference frames for an AUV.
Figure 3: Medusa autonomous marine vehicles, developed at DSOR, IST, Lisbon.
Figure 4: The MAYA autonomous underwater vehicle, developed at NIO, Goa.
Figure 5: Marine vehicle body reference frame showing the cross-track error.
Figure 6: Differentiable saturation function.
Figure 7: Path-following controller with the two-scale inner-outer loop approach.
Figure 8: IOS characterization of the inner-outer loop.
Figure 9: General feedback interconnection.
Figure 10: Line-of-sight guidance using the look-ahead distance.
Figure 11: Look-ahead distance plotted against the cross-track error for different gains.
Figure 12: Cross-track error for straight-line following.
Figure 13: Implementation of the path-following algorithm using an anti-windup scheme that includes the so-called D-methodology in [46].
Figures 14-15: The DELFIMx ASV and the MAYA AUV.
Figures 16-17: Simulated lawn-mowing maneuver of the DELFIMx vehicle in the presence of ocean currents and the cross-track error for the simulated track.
Figures 18-19: DELFIMx performing a lawn-mowing maneuver in the Azores, PT, and its cross-track error during the real mission.
Figure 20: Square mission of MAYA at the surface, 3 m, and 5 m depth at Supa Dam, India.
Figures 21-22: Medusa vehicle performing a lawnmower maneuver at the Expo site, Lisbon, Portugal, and its heading and course showing the effect of ocean currents.
Figures 23-24: Simulation result of arc following and the evolution of the cross-track error during arc following.
Full article
23 pages, 4135 KiB  
Article
Designing Multimodal Interactive Dashboard of Disaster Management Systems
by Abeer AlAbdulaali, Amna Asif, Shaheen Khatoon and Majed Alshamari
Sensors 2022, 22(11), 4292; https://doi.org/10.3390/s22114292 - 5 Jun 2022
Cited by 11 | Viewed by 4570
Abstract
Disasters and crises are inevitable in this world. In the aftermath of a disaster, a society’s overall growth, resources, and economy are greatly affected as they cause damages from minor to huge proportions. Around the world, countries are interested in improving their emergency [...] Read more.
Disasters and crises are inevitable in this world. In the aftermath of a disaster, a society’s overall growth, resources, and economy are greatly affected as they cause damages from minor to huge proportions. Around the world, countries are interested in improving their emergency decision-making. The institutions are paying attention to collecting different types of data related to crisis information from various resources, including social media, to improve their emergency response. Previous efforts have focused on collecting, extracting, and classifying crisis data from text, audio, video, or files; however, the development of user-friendly multimodal disaster data dashboards to support human-to-system interactions during an emergency response has received little attention. Our paper seeks to fill this gap by proposing usable designs of interactive dashboards to present multimodal disaster information. For this purpose, we first investigated social media data and metadata for the required elicitation and analysis purposes. These requirements are then used to develop interactive multimodal dashboards to present complex disaster information in a usable manner. To validate our multimodal dashboard designs, we have conducted a heuristic evaluation. Experts have evaluated the interactive disaster dashboards using a customized set of heuristics. The overall assessment showed positive feedback from the evaluators. The proposed interactive multimodal dashboards complement the existing techniques of collecting textual, image, audio, and video emergency information and their classifications for usable presentation. The contribution will help the emergency response personnel in terms of useful information and observations for prompt responses to avoid significant damage. Full article
Show Figures
Figure 1: The components of the social media-based incident detection and monitoring system and the data visualization architecture.
Figure 2: The methodology for developing a multimodal data visualization framework.
Figure 3: A summary of required disaster-related data collection from social media and their classifications.
Figure 4: A severely damaged bridge after an earthquake in New Zealand and a mildly damaged bridge after an earthquake in Chile.
Figure 5: Multimodal data visualization framework.
Figure 6: Multi-monitor visual analytics elements: total case statistics, live SNS feed, heatmap, and city emergency level map.
Figure 7: Multi-monitor visual analytics elements: crisis category ranking, risk and sentiment bar chart, social traffic ranking, keyword word cloud, image gallery, and image network.
Figure 8: Multi-monitor visual analytics elements: video sentiment analysis, audio map, and collaboration board with emergency unit information and contact buttons.
Figure 9: One-page flow visualization user interface of text, image, audio, and video disaster information.
Full article
23 pages, 3834 KiB  
Article
RF eigenfingerprints, an Efficient RF Fingerprinting Method in IoT Context
by Louis Morge-Rollet, Frédéric Le Roy, Denis Le Jeune, Charles Canaff and Roland Gautier
Sensors 2022, 22(11), 4291; https://doi.org/10.3390/s22114291 - 5 Jun 2022
Cited by 4 | Viewed by 2658
Abstract
In IoT networks, authentication of nodes is essential, and RF fingerprinting is one of the candidate non-cryptographic methods. RF fingerprinting is a physical-layer security method that authenticates wireless devices using the impairments of their components. In this paper, we propose the RF [...] Read more.
In IoT networks, authentication of nodes is essential, and RF fingerprinting is one of the candidate non-cryptographic methods. RF fingerprinting is a physical-layer security method that authenticates wireless devices using the impairments of their components. In this paper, we propose the RF eigenfingerprints method, inspired by the face recognition work known as eigenfaces. Our method automatically learns important features using singular value decomposition (SVD), selects the important ones using the Ljung–Box test, and performs authentication based on a statistical model. We also present a simulation study, a real-world experiment, and an FPGA implementation to highlight the performance of the method. In particular, we propose a novel RF fingerprinting impairments model for simulation. The end of the paper is dedicated to a discussion of the properties that make RF fingerprinting attractive in the IoT context, with our method given as an example. Indeed, the RF eigenfingerprint method has interesting properties such as good scalability, low complexity, and high explainability, making it a good candidate for implementation in the IoT context. Full article
(This article belongs to the Special Issue Physical-Layer Security for Wireless Communications)
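The eigenfingerprint idea parallels eigenfaces: stack preprocessed captures as rows, take the SVD, and keep the leading right singular vectors as a feature basis onto which new signals are projected. A generic NumPy sketch of that step only (the Ljung–Box selection and statistical decision model are omitted):

import numpy as np

def learn_basis(training_signals: np.ndarray, n_components: int = 8):
    """Rows are preprocessed captures; returns the data mean and the top
    right singular vectors (the 'eigenfingerprint' basis)."""
    mean = training_signals.mean(axis=0)
    _, _, vt = np.linalg.svd(training_signals - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(signal: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> np.ndarray:
    return basis @ (signal - mean)        # feature vector used for authentication

rng = np.random.default_rng(1)
train = rng.normal(size=(200, 512))       # stand-in for preprocessed RF captures
mean, basis = learn_basis(train)
print(project(rng.normal(size=512), mean, basis).shape)   # (8,)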
Show Figures
Figure 1: Eigenfaces example.
Figure 2: Methodology summary.
Figures 3-4: Preprocessing processes no. 1 and no. 2.
Figure 5: Impairments model.
Figure 6: Visualization of learned features.
Figure 7: Testbed of the experiment.
Figure 8: Performance evaluation on real-world signals.
Figure 9: Three-step decision process.
Full article
10 pages, 1525 KiB  
Article
Retina-like Computational Ghost Imaging for an Axially Moving Target
by Yingqiang Zhang, Jie Cao, Huan Cui, Dong Zhou, Bin Han and Qun Hao
Sensors 2022, 22(11), 4290; https://doi.org/10.3390/s22114290 - 5 Jun 2022
Cited by 3 | Viewed by 2348
Abstract
Unlike traditional optical imaging schemes, computational ghost imaging (CGI) provides a way to reconstruct images with the spatial distribution information of illumination patterns and the light intensity collected by a single-pixel detector or bucket detector. Compared with stationary scenes, the relative motion between [...] Read more.
Unlike traditional optical imaging schemes, computational ghost imaging (CGI) provides a way to reconstruct images with the spatial distribution information of illumination patterns and the light intensity collected by a single-pixel detector or bucket detector. Compared with stationary scenes, the relative motion between the target and the imaging system in a dynamic scene causes the degradation of reconstructed images. Therefore, we propose a time-variant retina-like computational ghost imaging method for axially moving targets. The illuminated patterns are specially designed with retina-like structures, and the radii of foveal region can be modified according to the axial movement of target. By using the time-variant retina-like patterns and compressive sensing algorithms, high-quality imaging results are obtained. Experimental verification has shown its effectiveness in improving the reconstruction quality of axially moving targets. The proposed method retains the inherent merits of CGI and provides a useful reference for high-quality GI reconstruction of a moving target. Full article
(This article belongs to the Collection Computational Imaging and Sensing)
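Basic CGI reconstruction correlates the bucket-detector values with the known illumination patterns; the compressive-sensing variants used in the paper refine this, but the second-order correlation below conveys the principle (a generic sketch, not the authors' code):

import numpy as np

def ghost_image(patterns: np.ndarray, bucket: np.ndarray) -> np.ndarray:
    """patterns: (M, H, W) illumination patterns; bucket: (M,) single-pixel values.
    Returns <B_i * P_i> - <B> * <P>, the standard correlation-based GI estimate."""
    return (np.tensordot(bucket, patterns, axes=1) / len(bucket)
            - bucket.mean() * patterns.mean(axis=0))

# Simulated acquisition of a simple scene with random binary patterns
rng = np.random.default_rng(0)
scene = np.zeros((32, 32)); scene[10:22, 12:20] = 1.0
patterns = rng.integers(0, 2, size=(1500, 32, 32)).astype(float)
bucket = (patterns * scene).sum(axis=(1, 2))
img = ghost_image(patterns, bucket)
print(img.shape, img.max())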
Show Figures
Figure 1: Experimental setup.
Figure 2: Reconstruction of an axially moving target by RGI and VRGI with different numbers of measurements and different velocities.
Figure 3: PSNR values of RGI and VRGI images at different velocities with 1024, 1229, 1434, and 1638 measurements.
Full article
13 pages, 4429 KiB  
Article
Doppler Modeling and Simulation of Train-to-Train Communication in Metro Tunnel Environment
by Pengyu Zhao, Xiaoyong Wang, Kai Zhang, Yanliang Jin and Guoxin Zheng
Sensors 2022, 22(11), 4289; https://doi.org/10.3390/s22114289 - 4 Jun 2022
Cited by 1 | Viewed by 2135
Abstract
The communication system of urban rail transit is gradually changing from train-to-ground (T2G) to train-to-train (T2T) communication. The subway can travel at speeds of up to 200 km/h in the tunnel environment, and communication between trains can be conducted via millimeter waves with [...] Read more.
The communication system of urban rail transit is gradually changing from train-to-ground (T2G) to train-to-train (T2T) communication. The subway can travel at speeds of up to 200 km/h in the tunnel environment, and communication between trains can be conducted via millimeter waves with minimum latency. A precise channel model is required to test the reliability of T2T communication over a non-line-of-sight (NLoS) Doppler channel in a tunnel scenario. In this paper, a description of the ray angles at the T2T communication terminals is established, together with the mapping relationship of the multipath signals from the transmitter to the receiver. The channel parameters, including the angle, amplitude, and mapping matrix from the transmitter to the receiver, are obtained by the ray-tracing method. In addition, the channel model for the T2T communication system with multipath propagation is constructed. The Doppler spread simulation results in this paper are consistent with the RT simulation results. Combined with the Doppler spread model, a physical channel modelling approach that uses an IQ vector phase shifter to reproduce Doppler spread in the RF domain is proposed. Full article
(This article belongs to the Section Communications)
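For orientation, the per-path Doppler shift in a mobile-to-mobile link is commonly approximated from both terminals' speeds and the ray's departure and arrival angles; a small sketch of that textbook relation (not necessarily the exact expression used in the paper):

import numpy as np

C = 3.0e8   # speed of light, m/s

def doppler_shift(f_carrier_hz, v_tx_mps, v_rx_mps, aod_rad, aoa_rad):
    """Per-path Doppler shift: contributions from both terminals, each scaled
    by the cosine of the angle between its velocity and the ray (textbook
    approximation; sign conventions depend on the angle definitions)."""
    lam = C / f_carrier_hz
    return (v_tx_mps * np.cos(aod_rad) + v_rx_mps * np.cos(aoa_rad)) / lam

# Hypothetical example: mmWave carrier, trains at 160 km/h and 80 km/h, near-axial rays
print(doppler_shift(28e9, 160 / 3.6, 80 / 3.6, np.deg2rad(5), np.deg2rad(10)))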
Show Figures
Figure 1: Three-dimensional map of T2T communication in the tunnel scenario.
Figure 2: Three views of the T2T communication scenario: transmitter and receiver.
Figure 3: Channel model based on the spatial mirror method.
Figure 4: Channel model based on geometric space random scattering.
Figure 5: Schematic diagram of the WI simulation tunnel.
Figure 6: Multipath signal propagation paths.
Figure 7: Angle and power of multipath signals: ZOD, AOD, ZOA, and AOA.
Figure 8: Receive and transmit signal mapping matrix.
Figure 9: Doppler spread simulation results of T2T communication for vt = 160 km/h with vr = 80 km/h and with vr = 160 km/h.
Figure 10: The circuit structure of the IQ vector phase shifter.
Figure 11: Physical simulation model of the channel.
Full article
26 pages, 42761 KiB  
Review
Chromism-Integrated Sensors and Devices for Visual Indicators
by Hyunho Seok, Sihoon Son, Jinill Cho, Sanghwan Choi, Kihong Park, Changmin Kim, Nari Jeon, Taesung Kim and Hyeong-U Kim
Sensors 2022, 22(11), 4288; https://doi.org/10.3390/s22114288 - 4 Jun 2022
Cited by 8 | Viewed by 3934
Abstract
The bifunctionality of chromism-integrated sensors and devices has been highlighted because of their reversibility, fast response, and visual indication. For example, electrochromic materials, a representative class of chromic materials, exhibit optical modulation under ion insertion/extraction when a potential is applied. This operation mechanism can be [...] Read more.
The bifunctionality of chromism-integrated sensors and devices has been highlighted because of their reversibility, fast response, and visual indication. For example, electrochromic materials, a representative class of chromic materials, exhibit optical modulation under ion insertion/extraction when a potential is applied. This operation mechanism can be integrated with various sensors (pressure, strain, biomolecules, gas, etc.) and devices (energy conversion/storage systems) as visual indicators for user-friendly operation. In this review, recent advances in the field of chromism-integrated systems for visual indicators are categorized for various chromism-integrated sensors and devices. This review can provide insights for researchers working on chromism, sensors, or devices. The integrated chromic devices are evaluated in terms of coloration-bleach operation, cycling stability, and coloration efficiency. In addition, the existing challenges and prospects for chromism-integrated sensors and devices are summarized for further research. Full article
(This article belongs to the Special Issue State-of-the Art in Gas Sensors based on Nanomaterials)
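Coloration efficiency, one of the evaluation metrics named here, is conventionally the change in optical density per unit of injected charge density, CE = log10(T_bleached / T_colored) / Q. A small worked sketch with hypothetical numbers:

import numpy as np

def coloration_efficiency(t_bleached, t_colored, charge_density_c_per_cm2):
    """CE (cm^2/C) = log10(T_bleached / T_colored) / Q, the usual definition."""
    delta_od = np.log10(t_bleached / t_colored)
    return delta_od / charge_density_c_per_cm2

# Hypothetical device: transmittance drops from 80% to 30% after 5 mC/cm^2 of charge
print(coloration_efficiency(0.80, 0.30, 5e-3))   # about 85 cm^2/C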
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Wearable electrochromic devices (ECD) based on color-changeable e-skin integrated with a tactile sensor ((<b>a</b>–<b>d</b>) adapted with permission from ref. [<a href="#B22-sensors-22-04288" class="html-bibr">22</a>], copyright 2015 Nature Publishing Group and (<b>e</b>–<b>k</b>) adapted with permission from ref. [<a href="#B23-sensors-22-04288" class="html-bibr">23</a>], copyright 2020 American Chemical Society). (<b>a</b>) Chameleon-inspired e-skin by electrochromic polymer in poly (3-hexylthiophene-2, 5-diyl) (P3HT) with single-wall carbon nanotubes-coated pyramid layer of the tactile sensor (inset SEM image). (<b>b</b>) Schematic layout of the interactive color-changeable e-skin with the circuit and photos of PSEC. (<b>c</b>) The adsorption and time versus the low-pressure and high-pressure regimes of the designed e-skin. (<b>d</b>) Interactive color-changing and tactile-sensing e-skin. Depending on the strength of the handshake (tactile sensing), the color of the ECD changes (visual detection). (<b>e</b>) The circuit diagram with the working mechanism of the EC-tactile sensor for direct visualization of the stresses. (<b>f</b>) Interactive color change of ionic polyacrylamide (PAAm) organogel depending on the wrist flexion. (<b>g</b>) Pressing the PAAm organogel by finger and (<b>h</b>) direct stress distribution caused by the finger press. (<b>i</b>) Finite element analysis simulation of the stress distribution. (<b>j</b>) UV-vis absorption spectra with an optical image (inset) of the ionic PAAm organogel under varied potentials. (<b>k</b>) Cyclic voltammetric diagram of the PAAm organogel with or without (inset) 1-methyl-4,4’-bipyridinium iodide.</p>
Full article ">Figure 2
<p>Pressure sensor integrated with EC visual detection ((<b>a</b>–<b>c</b>) adapted with permission from ref. [<a href="#B24-sensors-22-04288" class="html-bibr">24</a>], copyright 2021 American Chemical Society and (<b>d</b>–<b>f</b>) adapted with permission from ref. [<a href="#B25-sensors-22-04288" class="html-bibr">25</a>] copyright 2016 Wiley-VCH Verlag GmbH &amp; Co.). (<b>a</b>) Schematic circuit diagram of the pressure-based immunoassay platform; flexible pressure sensor by catalytic reaction and immunoreaction (red box) and voltage-regulated ECD as a visualized readout (blue box). (<b>b</b>) Color switching time of the ECD. (<b>c</b>) Pressure response of the designed skin-inspired pressure sensor. (<b>d</b>) Operating mechanism of the paper-based ECD is incorporated with a pressure sensor (Rsensor). The paper-based readout system consists of resistive graphite separators and gold nanoparticle segments (readout) with Prussian blue/polyaniline as electrochromic materials. (<b>e</b>) Visual readout of pressure vs. voltage by a gradual color change of the gold nanoparticle segments. (<b>f</b>) A pressure readout system was applied to the bandage at the ankle.</p>
Full article ">Figure 3
<p>Electrochromic integrated strain sensor for visualization ((<b>a</b>–<b>d</b>) adapted with permission from ref. [<a href="#B30-sensors-22-04288" class="html-bibr">30</a>], copyright 2017 The Royal Society of Chemistry, and (<b>e</b>–<b>j</b>) adapted with permission from ref. [<a href="#B31-sensors-22-04288" class="html-bibr">31</a>], copyright 2022 Elsevier). (<b>a</b>) Schematic and circuit diagram of an interactive color-changeable platform with an ECD integrated strain sensor. Strain sensors are composed of the PVA/MWCNT/PEDOT:PSS on a PDMS substrate with a transmittance spectrum (red box). ECD consists of a polyaniline nanofiber/electrolyte/V<sub>2</sub>O<sub>5</sub> with an ITO-coated PET film as an electrode, displaying a color change from yellow to dark upon application of a voltage (blue box). (<b>b</b>) Transmittance change under varied applied strain (0, 10, 20, and 30%). (<b>c</b>) Current-voltage curves of the strain sensor under various strain (10, 15, 20, and 30%). (<b>d</b>) Photograph of the ECD integrated strain sensor with finger motions. (<b>e</b>) Skin-attachable ECD array with strain and temperature sensor integration (<b>left</b>) and a stretchable array of ECDs (<b>right</b>). (<b>f</b>) Transmission change of the designed ECD under bias voltages at 499 nm. (<b>g</b>) Normalized current (I/I<sub>o</sub>) variation by finger touch. Visual information of wrist bend and skin temperature from the skin-attached array of ECD color patterns varying under the applied strain (ε) and temperature (T): (<b>h</b>) T = 33.9 °C; ε = 0%, (<b>i</b>) T = 33.9 °C; ε = 21.1%, (<b>j</b>) T = 40.5 °C; ε = 31.3%.</p>
Full article ">Figure 4
<p>Visual biosensing system based on the EC readout ((<b>a</b>–<b>c</b>) adapted with permission from ref. [<a href="#B48-sensors-22-04288" class="html-bibr">48</a>], copyright 2018 American Chemical Society, (<b>d</b>,<b>e</b>) adapted with permission from ref. [<a href="#B49-sensors-22-04288" class="html-bibr">49</a>], copyright 2022 Elsevier, and (<b>f</b>,<b>g</b>) adapted with permission from ref. [<a href="#B50-sensors-22-04288" class="html-bibr">50</a>], copyright 2019 American Chemical Society). (<b>a</b>) Schematic of the IrO<sub>x</sub> NPs electrochromism induced by the change in resistance in molecularly imprinted polymer due to chlorpyrifos analyte. (<b>b</b>) Visual detection of chlorpyrifos under varied oxidation potentials and chlorpyrifos concentrations. (<b>c</b>) Charge (mC) change of the IrO<sub>x</sub> NPs during 100 s of oxidation (red), reduction (blue), and redox cycles with a switch of 1 s (purple) and 10 s (green). (<b>d</b>) The schematic and operating mechanism of the paper-based electrochromic glucose sensor. (<b>e</b>) Photographs of the working, reference, and counter electrodes (from left). EC behavior of the deposited PANI under different potentials (blue box). (<b>f</b>) Working principle of the enzymatic self-powered biosensor (ESPB) in formaldehyde detection. (<b>g</b>) Short-circuit current of the ESPB by successive addition of 0.1 mM acetaldehyde, 0.1 mM ethanol, and 0.1 mM formaldehyde, which indicates selectivity toward formaldehyde.</p>
Full article ">Figure 5
<p>EC energy storage devices with the capability of visual charge level inspection ((<b>a</b>–<b>e</b>) adapted with permission ref. [<a href="#B6-sensors-22-04288" class="html-bibr">6</a>], copyright 2019 American Chemical Society, (<b>f</b>–<b>j</b>) adapted with permission ref. [<a href="#B58-sensors-22-04288" class="html-bibr">58</a>], copyright 2018 Wiley-VCH Verlag GmbH &amp; Co.). (<b>a</b>) Operation schematic of an all-transparent stretchable EC supercapacitor (all-TSES). (<b>b</b>) Coloration (discharged state) and bleaching (charged state) of the all-TSES under normal and stretched states. (<b>c</b>) Transmittance change of various WO<sub>3</sub> nanostructures consisting of the all-TSES. (PL: PEDOT: PSS Layer) (<b>d</b>) Galvanostatic charge–discharge (GCD) graphs of the all-TSES with three variations of the nanostructure combination (<b>e</b>) CV result of 20% stretched all-TSES devices. (<b>f</b>) Structural diagram of a wearable EC supercapacitor and its vertical gold nanowire (v-AuNWs) structure evaluated by SEM. (<b>g</b>) GCD curves of a v-AuNW/PANI supercapacitor under varied areal current densities. (<b>h</b>) EC properties of v-AuNW/PANI-based supercapacitor with different charge levels and under conditions required for flexibility. (<b>i</b>) CV curve with the photograph of EC change. (<b>j</b>) CV curves of the v-AuNW/PANI supercapacitor comparing the dynamic condition with the static state.</p>
Full article ">Figure 6
<p>Gas sensor with chromic characterization ((<b>a</b>,<b>b</b>) adapted with permission from ref. [<a href="#B62-sensors-22-04288" class="html-bibr">62</a>], copyright 2017 Elsevier, (<b>c</b>) adapted with permission from ref. [<a href="#B63-sensors-22-04288" class="html-bibr">63</a>], copyright 2017 IOP Publishing, (<b>d</b>,<b>e</b>) adapted with permission from ref. [<a href="#B64-sensors-22-04288" class="html-bibr">64</a>], copyright 2022 Springer, (<b>f</b>–<b>i</b>) adapted with permission from ref. [<a href="#B65-sensors-22-04288" class="html-bibr">65</a>], copyright 2021 American Chemical Society). (<b>a</b>) Photographic images of Pd–WO<sub>3</sub> films deposited on a flexible substrate for H<sub>2</sub> detection: (<b>left</b>) pristine Pd-WO<sub>3</sub>, (<b>right</b>) color-changed Pd-WO<sub>3</sub> under 1% H<sub>2</sub> gas. (<b>b</b>) Photographic images of the installed Pd-WO<sub>3</sub> films on a gas pipe carrying pure H<sub>2</sub> before and after injecting H<sub>2</sub>. (<b>c</b>) Configuration of a gasochromic smart window. A large-scale gasochromic smart window (<b>d</b>) under colored and (<b>e</b>) bleached states. The area of the smart window was 1.3 × 0.8 m<sup>2</sup>. (<b>f</b>) Transmittance change of the WO<sub>3</sub>-SiO<sub>2</sub> gasochromic films in the colored and bleached states. (<b>g</b>) Photographs of WO<sub>3</sub>-SiO<sub>2</sub> in colored and bleached states. A schematic illustration of the assembled window is shown below. (<b>h</b>) Schematic of the insulated box for sunlight absorbance measurement. (<b>i</b>) Heating curve of the sunlight absorbance test.</p>
Full article ">Figure 7
<p>The gasochromic mechanisms of transition metal oxides with noble metal catalysts ((<b>a</b>) adapted with permission from ref. [<a href="#B74-sensors-22-04288" class="html-bibr">74</a>], copyright 2018 Elsevier, (<b>b</b>) adapted with permission from ref. [<a href="#B76-sensors-22-04288" class="html-bibr">76</a>], copyright 2014 Elsevier, (<b>c</b>) adapted with permission from ref. [<a href="#B77-sensors-22-04288" class="html-bibr">77</a>], copyright 2017 Elsevier, (<b>d</b>–<b>f</b>) adapted with permission from ref. [<a href="#B83-sensors-22-04288" class="html-bibr">83</a>], copyright 2021 Elsevier, and (<b>g</b>–<b>i</b>) adapted with permission from ref. [<a href="#B85-sensors-22-04288" class="html-bibr">85</a>], copyright 2022 Springer). The schematic of the mechanism for (<b>a</b>) a gasochromic PdCl<sub>2</sub>-WO<sub>3</sub> nanofiber and (<b>b</b>) Pd-WO<sub>3</sub> film with H<sub>2</sub> molecule. (<b>c</b>) XPS spectra of the Pt/Ni/Pt-0.5/MoO<sub>3</sub> film before, during, and after H<sub>2</sub> exposure. (<b>d</b>) Visible spectra transmittance of the pristine WO<sub>3</sub> (black line) and H<sub>2</sub> exposure states. (redline) (<b>e</b>) The illustration of the H<sub>2</sub> sensing mechanism for the Pt/Ni/Pt-0.5/MoO<sub>3</sub> film. (<b>f</b>) Suggested H<sub>2</sub> sensing mechanism for the Pt/Ni/Pt-MoO<sub>3</sub> composite. (<b>g</b>) Schematic of the porous WO<sub>3</sub> fabrication by the polystyrene template method. (<b>h</b>) Conceptual comparison between dense WO<sub>3</sub> and porous WO<sub>3</sub> films with H<sub>2</sub>. (<b>i</b>) SEM image of the fabricated porous WO<sub>3</sub> film of 360 nm diameter.</p>
Full article ">Figure 8
<p>Ion responsive chromic device ((<b>a</b>–<b>c</b>) adapted with permission from ref. [<a href="#B102-sensors-22-04288" class="html-bibr">102</a>], copyright 2014 The Royal Society of Chemistry, (<b>d</b>) adapted with permission from ref. [<a href="#B103-sensors-22-04288" class="html-bibr">103</a>], copyright 2018 Elsevier, (<b>e</b>) adapted with permission from ref. [<a href="#B105-sensors-22-04288" class="html-bibr">105</a>], copyright 2014 The Royal Society of Chemistry, and (<b>f</b>,<b>g</b>) adapted with permission from ref. [<a href="#B106-sensors-22-04288" class="html-bibr">106</a>], copyright 2011 The Royal Society of Chemistry). (<b>a</b>) Gradual colorimetric change of the PANI-LB under Hg<sup>2+</sup> exposure in an aqueous solution. (<b>b</b>) Selectivity test conducted by exposing the PANI-LB to 5 μM of various metal ion solutions. (<b>c</b>) Reflectance in visible spectra with increasing concentration of Hg<sup>2+</sup>. (<b>d</b>) Photographs of color-changed hydrogels exposed to various transition metal ions. Swelling and deswelling of the gels can be detected in comparison with the dash lines corresponding to 3 × 3 cm<sup>2</sup>. (<b>e</b>) Images of color gradient and UV-vis absorbance spectra of SiO<sub>2</sub> and DPC-doped fibrous films under different Cd<sup>2+</sup> ion concentrations. (<b>f</b>) Photographs and (<b>g</b>) reflectance spectra displaying the selectivity of the fabricated membranes in various metal ions with the concentration of 1 ppm.</p>
Full article ">Figure 9
<p>(<b>a</b>) Schematic illustration of the self-powered finger-motion-sensing display (SMSD) based on an IHN-BCP film on an ionic gel electrode, and touchless motion sensing showing the color change in the IHN-BCP layer. (<b>b</b>) The photograph of 4 × 4 arrays of the SMSDs on a flexible 20 × 20 cm<sup>2</sup> substrate. (<b>c</b>) The voltage signals of the SMSD panel according to the hand motion. (<b>d</b>) Two-dimensional contour plot mapping of the voltage signal. (<b>e</b>) Images of the structural color of the IHN-BCP from touchless hand motion. (<b>a</b>–<b>e</b>) adapted with permission from ref. [<a href="#B111-sensors-22-04288" class="html-bibr">111</a>], copyright 2022 Elsevier.</p>
Full article ">Figure 10
<p>Multi-stimuli-responsive chromic device with UV exposure and temperature ((<b>a</b>–<b>d</b>) adapted with permission from ref. [<a href="#B113-sensors-22-04288" class="html-bibr">113</a>], copyright 2016 Wiley-VCH Verlag GmbH &amp; Co., (<b>e</b>–<b>h</b>) adapted with permission from ref. [<a href="#B116-sensors-22-04288" class="html-bibr">116</a>], copyright 2021 Wiley-VCH Verlag GmbH &amp; Co, (<b>i</b>,<b>j</b>) adapted with permission from ref. [<a href="#B117-sensors-22-04288" class="html-bibr">117</a>], copyright 2021 Springer, and (<b>k</b>) adapted with permission from ref. [<a href="#B120-sensors-22-04288" class="html-bibr">120</a>], copyright 2021 American Chemical Society). (<b>a</b>) Schematic illustrations of the various functional layers in the multi-stimuli chromic device. (<b>b</b>) Schematic illustrations of the near-field communication electronics with a temperature sensor. (<b>c</b>) Image of the fully integrated multifunctional chromic device. (<b>d</b>) The chemical response of (4-phenoxyphenyl) diphenylsulfonium triflate with crystal violet lactone and Congo red for sensing in the UV-A and UV-B bands, respectively. (<b>e</b>) Schematic and pattern of the leather-based multi-stimuli device. (<b>f</b>) Thermochromic, photochromic, electrochromic demonstration of the leather-based multi-stimuli device. (<b>g</b>) Reflectance spectra of the photochromic pigment with 365 nm UV exposure (red line) and without UV exposure (black line). (<b>h</b>) Reflectance spectra of the thermochromic pigment around the transition temperature (31 °C). (<b>i</b>) Color change of the chromic fibers depending on the UV intensities. (<b>j</b>) Photographs of a three-line striped textile showing different colors and monitoring different ambient UV indices and temperatures. (<b>k</b>) Multi-stimuli-responsive microfluidic device based on NO<sub>2</sub>BIPS@IG.</p>
Figure 11
<p>Multi-stimuli-responsive chromic device with a mechanical–thermal response ((<b>a</b>–<b>e</b>) adapted with permission from ref. [<a href="#B121-sensors-22-04288" class="html-bibr">121</a>], copyright 2020 Elsevier and (<b>f</b>,<b>g</b>) adapted with permission from ref. [<a href="#B124-sensors-22-04288" class="html-bibr">124</a>], copyright 2020 The Royal Society of Chemistry). (<b>a</b>) Schematic of four different window modes for the dispersed VO<sub>2</sub> film in the PVA–PDMS bilayer structures; (<b>i</b>) normal, (<b>ii</b>) privacy, (<b>iii</b>) energy-saving, and (<b>vi</b>) simultaneous energy saving and privacy mode. (<b>b</b>) Demonstration photographs of the multi-stimuli film with (1) no strain, (2) 75 % strain, and (3) temperature of 95 °C. (<b>c</b>) Illustration of the VO<sub>2</sub> phase transition between the high-temperature rutile phase and low-temperature monoclinic phase. Transmittance spectra of the dispersed VO<sub>2</sub> film in the PVA–PDMS bilayer structures under privacy mode with (<b>d</b>) mechanical response and (<b>e</b>) thermal response. (<b>f</b>) Schematic and images of the wearable mechanothermal EC device on the dorsal side of a finger. (<b>g</b>) Structure and performance of the thermal mapping unit.</p>
Scheme 1
<p>Schematic illustration of the chromic-system (visual indicator)-integrated sensors and devices.</p>
13 pages, 4226 KiB  
Article
A Cantilever Beam-Based Triboelectric Nanogenerator as a Drill Pipe Transverse Vibration Energy Harvester Powering Intelligent Exploitation System
by Zhenhui Lian, Qunyi Wang, Chuanqing Zhu, Cong Zhao, Qiang Zhao, Yan Wang, Zhiyuan Hu, Ruijiang Xu, Yukai Lin, Tianyu Chen, Xiangyu Liu, Xiaoyan Xu, Ling Liu, Xiu Xiao and Minyi Xu
Sensors 2022, 22(11), 4287; https://doi.org/10.3390/s22114287 - 4 Jun 2022
Cited by 7 | Viewed by 2423
Abstract
Measurement While Drilling (MWD) is the most commonly used real-time information acquisition technique in offshore intelligent drilling, but its power supply has always been a concern. Triboelectric nanogenerators have been shown to harvest low-frequency vibrational energy in the environment and convert it into electricity [...] Read more.
Measurement While Drilling (MWD) is the most commonly used real-time information acquisition technique in offshore intelligent drilling, but its power supply has always been a concern. Triboelectric nanogenerators have been shown to harvest low-frequency vibrational energy from the environment and convert it into electricity to power small sensors and electrical devices. This work proposes a cantilever-beam-based triboelectric nanogenerator (CB-TENG) for harvesting the transverse vibration energy of a drill pipe. The CB-TENG consists of two vibrators composed of spring steel with attached PTFE, and Al electrodes. The structurally optimized CB-TENG can output a peak power of 2.56 mW under a vibration condition of f = 3.0 Hz and A = 50 mm, and the electrical output can be further enhanced with increased vibration parameters. An array-type vibration energy harvester integrating eight CB-TENGs is designed to fully fit the interior of the drill pipe and improve output performance. The device can realize omnidirectional vibration energy harvesting in the two-dimensional plane with good robustness. Under a typical vibration condition, the short-circuit current and the peak power reach 49.85 μA and 30.95 mW, respectively. Finally, a series of demonstration experiments was carried out, indicating the application prospects of the device. Full article
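To make the reported output figures concrete, the short sketch below shows how peak and average power delivered to a load resistor can be estimated from a sampled voltage trace; the 160 V amplitude, 10 MΩ load, and sampling settings are illustrative assumptions chosen only so the result lands near the reported 2.56 mW, not values taken from the paper.

```python
import numpy as np

def load_power(voltage_v, load_ohm):
    """Instantaneous, peak and average power dissipated in a load resistor (P = V^2 / R)."""
    p = voltage_v ** 2 / load_ohm
    return p, p.max(), p.mean()

# Illustrative 3 Hz trace sampled at 1 kHz for 2 s (all values assumed).
t = np.linspace(0.0, 2.0, 2000, endpoint=False)
v = 160.0 * np.sin(2.0 * np.pi * 3.0 * t)   # assumed ~160 V amplitude across an assumed 10 MOhm load
_, p_peak, p_avg = load_power(v, 10e6)
print(f"peak {p_peak * 1e3:.2f} mW, average {p_avg * 1e3:.2f} mW")
```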
(This article belongs to the Special Issue Advanced Sensing Technologies for Marine Intelligent Systems)
Figure 1
<p>The concept diagram of this work.</p>
Figure 2
<p>Application scenario and structure of CB-TENG device (<b>a</b>) Application of CB-TENG in vibration energy collection of drill pipe for offshore oil extraction; (<b>b</b>) the structure of array-type CB-TENG; (<b>c</b>) the composition of CB-TENG; (<b>d</b>) the partial enlargement of the tip of CB-TENG; (<b>e</b>) SEM image of PTFE surface (Not sanded); (<b>f</b>) SEM image of sanded PTFE surface.</p>
Figure 3
<p>(<b>a</b>) Working principle of CB-TENG; (<b>b</b>) diagram of CB-TENG voltage output under different backplane structures and different number of vibrators; (<b>c</b>) diagram of CB-TENG voltage output at <span class="html-italic">f</span> = 3.0 Hz, <span class="html-italic">A</span> = 50 mm with different thickness of spring steel; (<b>dI</b>) force analysis diagram of unit length vibrator section; (<b>dII</b>) schematic diagram of single vibrator overall force.</p>
Figure 4
<p>The performance of CB-TENG under different vibration parameters. (<b>a</b>) 3D contour of short-circuit current variation with the vibration amplitude and frequency; (<b>b</b>) 3D contour of open-circuit voltage variation with the vibration amplitude and frequency; (<b>c</b>) 3D contour of transferred charge variation with the vibration amplitude and frequency; (<b>d</b>) the short-circuit current variation with the vibration amplitude; (<b>e</b>) the short-circuit current variation with the vibration frequency; (<b>f</b>) the transferred charge variation with the vibration amplitude; the percentage of (<b>g</b>) open-circuit voltage and (<b>h</b>) transferred charge varying with azimuth; (<b>i</b>) dependence of the voltage and output power density on the external load resistance for the CB-TENG working at <span class="html-italic">f</span> = 3 Hz, <span class="html-italic">A</span> = 50 mm.</p>
Figure 5
<p>The output performance of the array-type CB-TENG for vibration energy harvesting: (<b>a</b>) Array-type CB-TENG layout diagram; (<b>b</b>) the working circuit of array-type CB-TENG for vibration energy harvesting to power sensor or testing; (<b>c</b>) transferred charge for each TENG unit in the CB- array-type TENG; (<b>d</b>) charging performances to a capacitor of 10 μF for different CB-TENG arrays; (<b>e</b>) open-circuit voltage and short-circuit current with different amounts of integrated units; (<b>f</b>) the output power and the external load resistance with different amounts of integrated units working at <span class="html-italic">f</span> = 3 Hz, <span class="html-italic">A</span> = 50 mm.</p>
Figure 6
<p>Demonstration applications: (<b>a</b>) Voltage of different capacitors (C = 10, 22, 33, 47, 100, and 220 µF) charged by array-type CB-TENG at <span class="html-italic">f</span> = 3 Hz, <span class="html-italic">A</span> = 50 mm; (<b>b</b>) voltage of the same capacitor (C = 33 µF) charged by array-type CB-TENG at different vibration frequency and the same vibration amplitude of 50 mm; (<b>c</b>) sensitivity of CB-TENG to relative humidity; (<b>d</b>) durability of the CB-TENG; (<b>e</b>) the array-type CB-TENG lighting 204 LEDs; (<b>f</b>) powering a temperature sensor with array-type CB-TENG.</p>
15 pages, 24308 KiB  
Article
Proposal of an Alpine Skiing Kinematic Analysis with the Aid of Miniaturized Monitoring Sensors, a Pilot Study
by Caterina Russo, Elena Puppo, Stefania Roati and Aurelio Somà
Sensors 2022, 22(11), 4286; https://doi.org/10.3390/s22114286 - 4 Jun 2022
Cited by 6 | Viewed by 3127
Abstract
The recent growth and spread of smart sensor technologies make these connected devices suitable for diagnostics and monitoring in different fields. In particular, these sensors are useful in diagnostics for the control of diseases or during rehabilitation. They are also extensively used in the [...] Read more.
The recent growth and spread of smart sensor technologies make these connected devices suitable for diagnostics and monitoring in different fields. In particular, these sensors are useful in diagnostics for the control of diseases or during rehabilitation. They are also extensively used in the monitoring field, by both non-expert and expert users, to monitor health status and progress during a sports activity. For athletes, these devices can be used to track and enhance their performance. This development has led to the realization of miniaturized sensors that are wearable during different sporting activities without interfering with the movements of the athlete. The use of these sensors during training or racing opens new frontiers for understanding motions and the causes of injuries. This pilot study introduces a motion analysis system to monitor Alpine ski activities during training sessions. Through five inertial measurement units (IMUs), placed at five points on the athlete's body, it is possible to compute the angle of each joint and evaluate the ski run. By first comparing the IMU data with video and then presenting them to an expert coach, it is possible to identify in the data the same mistakes visible on camera. The aim of this work is to provide a tool that supports ski coaches during training sessions. Since the evaluation of athletes is currently based mainly on video, we assess the use of IMUs to support the coach's evaluation with more precise data. Full article
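As a minimal illustration of the kind of orientation estimate such an IMU pipeline builds on, the sketch below computes roll and pitch from a quasi-static accelerometer sample; the axis convention and sample values are assumptions for illustration, whereas the study derives joint angles from full IMU data in its own reference frames (Figures 1 and 2).

```python
import numpy as np

def roll_pitch_deg(ax, ay, az):
    """Roll and pitch (degrees) from one accelerometer sample, assuming the sensor is
    quasi-static so that gravity dominates the measured acceleration (x forward, z up)."""
    roll = np.degrees(np.arctan2(ay, az))
    pitch = np.degrees(np.arctan2(-ax, np.hypot(ay, az)))
    return roll, pitch

# Hypothetical boot-cuff sample (in g) during a carved turn.
print(roll_pitch_deg(0.10, 0.45, 0.88))
```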
(This article belongs to the Special Issue Sensor Technology for Sports Monitoring)
Figure 1
<p>Sensors reference system.</p>
Figure 2
<p>Local reference frame for each monitored part: boot cuff, lower trunk and poles.</p>
Figure 3
<p>Lateral inclination for boot cuff and lower trunk.</p>
Figure 4
<p>Pole roll, yaw and pitch angles.</p>
Figure 5
<p>Algorithm followed for the data analysis.</p>
Figure 6
<p>Pole acceleration along x axis.</p>
Figure 7
<p>Highlight of one turn in the roll angle graph.</p>
Figure 8
<p>Number of turns in video and roll angle.</p>
Figure 9
<p>Roll angle for the ski boots and for the back.</p>
Figure 10
<p>Yaw angle for the ski boots and for the back.</p>
Figure 11
<p>Roll, yaw and pitch angles for poles.</p>
Figure 12
<p>Pitch angle for right and left pole for Testers 1 and 2.</p>
Figure 13
<p>Slope inclination measures.</p>
Figure 14
<p>Comparing the video roll angles with the roll angles computed with the IMU.</p>
21 pages, 11767 KiB  
Article
Crosstalk Correction for Color Filter Array Image Sensors Based on Lp-Regularized Multi-Channel Deconvolution
by Jonghyun Kim, Kyeonghoon Jeong and Moon Gi Kang
Sensors 2022, 22(11), 4285; https://doi.org/10.3390/s22114285 - 4 Jun 2022
Cited by 1 | Viewed by 5020
Abstract
In this paper, we propose a crosstalk correction method for color filter array (CFA) image sensors based on Lp-regularized multi-channel deconvolution. Most imaging systems with CFA exhibit a crosstalk phenomenon caused by the physical limitations of the image sensor. In general, [...] Read more.
In this paper, we propose a crosstalk correction method for color filter array (CFA) image sensors based on Lp-regularized multi-channel deconvolution. Most imaging systems with CFA exhibit a crosstalk phenomenon caused by the physical limitations of the image sensor. In general, this phenomenon produces both color degradation and spatial degradation, which are respectively called desaturation and blurring. To improve the color fidelity and the spatial resolution in crosstalk correction, the feasible solution of the ill-posed problem is regularized by image priors. First, the crosstalk problem with complex spatial and spectral degradation is formulated as a multi-channel degradation model. An objective function with a hyper-Laplacian prior is then designed for crosstalk correction. This approach enables simultaneous improvement of color fidelity and restoration of sharp details without noise amplification. Furthermore, an efficient solver minimizes the crosstalk-correction objective function, which consists of Lp regularization terms. The proposed method was verified on synthetic datasets under various crosstalk and noise levels. Experimental results demonstrated that the proposed method outperforms the conventional methods in terms of the color peak signal-to-noise ratio and structural similarity index measure. Full article
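For orientation, a generic Lp-regularized multi-channel deconvolution objective of the kind described here can be written as below; this is a textbook-style sketch, and the paper's exact data term, choice of priors on the color-difference channels, and weights may differ (the gradient statistics in Figure 5 suggest exponents around p ≈ 0.6–0.7).

```latex
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}}\;
\tfrac{1}{2}\,\lVert \mathbf{H}\mathbf{x} - \mathbf{y} \rVert_2^2
\;+\; \lambda_1 \lVert \nabla \mathbf{x} \rVert_p^p
\;+\; \lambda_2 \lVert \nabla (\mathbf{C}\mathbf{x}) \rVert_p^p ,
\qquad 0 < p < 1
```

Here H denotes the multi-channel crosstalk (blur) operator, y the observed data, ∇ the gradient operator, C a color-difference transform, and λ1, λ2 the regularization weights.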
(This article belongs to the Collection Computational Imaging and Sensing)
Figure 1
<p>Problems of imaging under the crosstalk condition in CFA: (<b>a</b>) Original image; (<b>b</b>) degraded image by Gaussian kernel with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>g</mi> </msub> <mo>=</mo> <mn>0.45</mn> </mrow> </semantics></math> and noise with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>n</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>; (<b>c</b>) degraded image by Gaussian kernel with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>g</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> </semantics></math> and noise with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>n</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>. As the standard deviation <math display="inline"><semantics> <msub> <mi>σ</mi> <mi>g</mi> </msub> </semantics></math> of the Gaussian kernel increases, desaturation and blurring intensify owing to interference with neighboring channels.</p>
Figure 2
<p>Bayer CFA and characteristics of crosstalk phenomenon: (<b>a</b>) Bayer CFA structure. The Bayer CFA consists of three filters for respective R, G, and B channels. (<b>b</b>) Schematic image of crosstalk inside the imaging sensor. (<b>c</b>) Example of the spectral sensitivity of the camera imaging system in a crosstalk-free condition (solid lines) and in a crosstalk condition (dashed lines). The spectral sensitivity shift is caused by crosstalk inside the imaging sensor.</p>
Figure 3
<p>Crosstalk kernels at G location: (<b>a</b>) Gaussian kernel of <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>g</mi> </msub> <mo>=</mo> <mn>0.45</mn> </mrow> </semantics></math> and (<b>b</b>) Gaussian kernel of <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>g</mi> </msub> <mo>=</mo> <mn>0.60</mn> </mrow> </semantics></math>. Owing to the crosstalk kernel, spectral and spatial degradation simultaneously occur.</p>
Figure 4
<p>Example illustrations of single-channel and multi-channel degradation models: (<b>a</b>) Relationship of <math display="inline"><semantics> <mi mathvariant="bold">B</mi> </semantics></math> in the single-channel degradation model (left) and <math display="inline"><semantics> <mi mathvariant="bold">H</mi> </semantics></math> in the multi-channel degradation model (right). (<b>b</b>) Comparison of the single-channel degradation model (top) and the multi-channel degradation model (bottom). ARI demosaicing is applied equally to degraded images. The same degraded results are produced by different formulations of the crosstalk phenomenon, but different results are restored.</p>
Figure 5
<p>Probability distributions from Kodak and McMaster datasets: (<b>a</b>) Distribution of image gradients and (<b>b</b>) distribution of color difference gradients. The empirical distributions of image gradients and color difference gradients follow the probability distributions of <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>0.66</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mn>0.62</mn> </mrow> </semantics></math>, respectively.</p>
Figure 6
<p>Visual comparison of restored images, enlarged parts, and difference maps from <span class="html-italic">crosstalk degradation 1</span> (crosstalk kernel with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>g</mi> </msub> <mo>=</mo> <mn>0.45</mn> </mrow> </semantics></math> and noise with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>n</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>): (<b>a</b>) Ground truth, (<b>b</b>) CM1, (<b>c</b>) CM2, (<b>d</b>) CM3, (<b>e</b>) CM4, (<b>f</b>) CM6, and (<b>g</b>) PM. (<b>h</b>–<b>n</b>) Same methods as in (<b>a</b>–<b>g</b>). The first and second rows present the results of <span class="html-italic">Bike</span> and <span class="html-italic">Parrot</span> in the Kodak dataset, respectively.</p>
Figure 7
<p>Visual comparison of restored images, enlarged parts, and difference maps from <span class="html-italic">crosstalk degradation 2</span> (crosstalk kernel with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>g</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> </semantics></math> and noise with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>n</mi> </msub> <mo>=</mo> <mn>0.04</mn> </mrow> </semantics></math>): (<b>a</b>) Ground truth, (<b>b</b>) CM1, (<b>c</b>) CM2, (<b>d</b>) CM3, (<b>e</b>) CM4, (<b>f</b>) CM6, and (<b>g</b>) PM. (<b>h</b>–<b>n</b>) Same methods as in (<b>a</b>–<b>g</b>). The first and second rows present the results of <span class="html-italic">Bike</span> and <span class="html-italic">Parrot</span> in the Kodak dataset, respectively.</p>
Figure 8
<p>Visual comparison of restored images, enlarged parts, and difference maps from <span class="html-italic">crosstalk degradation 2</span> (crosstalk kernel with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>g</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> </semantics></math> and noise with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>n</mi> </msub> <mo>=</mo> <mn>0.04</mn> </mrow> </semantics></math>): (<b>a</b>) Ground truth, (<b>b</b>) CM1, (<b>c</b>) CM2, (<b>d</b>) CM3, (<b>e</b>) CM4, (<b>f</b>) CM6, and (<b>g</b>) PM. (<b>h</b>–<b>n</b>) Same methods as in (<b>a</b>–<b>g</b>). The first and second rows present the results of <span class="html-italic">mcm1</span> and <span class="html-italic">mcm4</span> in the McMaster dataset, respectively.</p>
Figure 9
<p>Influence of regularization parameter <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>1</mn> </msub> </semantics></math> for test image <span class="html-italic">House</span> for <span class="html-italic">crosstalk degradation 2</span> (crosstalk kernel with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>g</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> </semantics></math> and noise with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>n</mi> </msub> <mo>=</mo> <mn>0.04</mn> </mrow> </semantics></math>): (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.03</mn> </mrow> </semantics></math>, (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>, and (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Figure 10
<p>Influence of regularization parameter <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>2</mn> </msub> </semantics></math> for test image <span class="html-italic">Parrot</span> for <span class="html-italic">crosstalk degradation 2</span> (crosstalk kernel with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>g</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> </semantics></math> and noise with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>n</mi> </msub> <mo>=</mo> <mn>0.04</mn> </mrow> </semantics></math>): (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.0001</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.001</mn> </mrow> </semantics></math>, (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>, and (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Figure 11
<p>Objective performance comparison for test image <span class="html-italic">Parrot</span> according to regularization parameters <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>2</mn> </msub> </semantics></math>: (<b>a</b>) CPSNR values versus values of <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>1</mn> </msub> </semantics></math>; (<b>b</b>) SSIM values versus values of <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>1</mn> </msub> </semantics></math>; (<b>c</b>) CPSNR values versus values of <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>2</mn> </msub> </semantics></math>; and (<b>d</b>) SSIM values versus values of <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>2</mn> </msub> </semantics></math>.</p>
Figure 12
<p>Convergence for test image <span class="html-italic">Parrot</span>: (<b>a</b>) Value of cost function versus iteration number of various initial conditions for <span class="html-italic">crosstalk degradation 1</span> (crosstalk kernel with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>g</mi> </msub> <mo>=</mo> <mn>0.45</mn> </mrow> </semantics></math> and noise with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>n</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>); (<b>b</b>) value of cost function versus iteration number of various degradation conditions; and (<b>c</b>) visualization of residual image versus iteration number at <span class="html-italic">crosstalk degradation 2</span> (crosstalk kernel with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>g</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> </semantics></math> and noise with <math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mi>n</mi> </msub> <mo>=</mo> <mn>0.04</mn> </mrow> </semantics></math>).</p>
15 pages, 8736 KiB  
Article
Pseudo-Static Gain Cell of Embedded DRAM for Processing-in-Memory in Intelligent IoT Sensor Nodes
by Subin Kim and Jun-Eun Park
Sensors 2022, 22(11), 4284; https://doi.org/10.3390/s22114284 - 4 Jun 2022
Cited by 2 | Viewed by 3352
Abstract
This paper presents a pseudo-static gain cell (PS-GC) with extended retention time for an embedded dynamic random-access memory (eDRAM) macro for analog processing-in-memory (PIM). The proposed eDRAM cell consists of a two-transistor (2T) gain cell with a pseudo-static leakage compensation that maintains stored [...] Read more.
This paper presents a pseudo-static gain cell (PS-GC) with extended retention time for an embedded dynamic random-access memory (eDRAM) macro for analog processing-in-memory (PIM). The proposed eDRAM cell consists of a two-transistor (2T) gain cell with pseudo-static leakage compensation that maintains stored data without charge-loss issues. Hence, the PS-GC can offer unlimited retention time in the same manner as static RAM (SRAM). Due to the extended retention time, the bulky capacitors of conventional eDRAM are no longer needed, thereby improving the area efficiency of eDRAM-based analog PIMs. The active leakage compensation of the PS-GC can effectively hold stored data even in a deep-submicron process that shows significant leakage current. Therefore, the PS-GC can accelerate write-access and read-access times without concern about increased leakage current. The proposed gain cell and its 64 × 64 eDRAM macro were implemented in a 28 nm CMOS process. The bitcell of the proposed gain cell has 0.79 and 0.58 times the area of 6T SRAM and 8T SRAM bitcells, respectively. The post-layout simulation results demonstrate that the eDRAM successfully maintains pseudo-static operation with unlimited retention time under wide variations of process, voltage, and temperature. At an operating frequency of 667 MHz, the eDRAM macro achieved an operating voltage range from 0.9 to 1.2 V and an operating temperature range from −25 to 85 °C regardless of process variation. The post-layout simulated write-access and read-access times were below 0.3 ns at an operating temperature of 85 °C. The PS-GC consumes a static power of 2.2 nW/bit at an operating temperature of 25 °C. Full article
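As a rough sense of scale for why deep-submicron gain cells lose data so quickly without compensation (compare Figure 2), the sketch below estimates retention time from storage-node capacitance, tolerable voltage droop, and leakage current; the numbers are illustrative assumptions, not values from the paper.

```python
def retention_time_s(c_node_f, dv_max_v, i_leak_a):
    """Back-of-envelope retention time of a dynamic storage node: t ~ C * dV / I_leak."""
    return c_node_f * dv_max_v / i_leak_a

# Illustrative values: 1 fF storage node, 200 mV tolerable droop, 10 pA leakage.
t_ret = retention_time_s(1e-15, 0.2, 10e-12)
print(f"~{t_ret * 1e6:.0f} us before the stored level drifts out of its noise margin")
```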
(This article belongs to the Special Issue Intelligent IoT Circuits and Systems)
Figure 1
<p>Conceptual block diagram of eDRAM-based PIM used for intelligent IoT sensor nodes.</p>
Figure 2
<p>Retention time of 2T gain cell implemented in 28, 65 and 180 nm processes.</p>
Figure 3
<p>Schematic of conventional 2T1C gain cell.</p>
Figure 4
<p>Simulated leakage current after write operation of 2T1C gain cell.</p>
Figure 5
<p>Monte Carlo simulation results of storage node (SN) voltage during the data hold mode of the 2T1C gain cell.</p>
Figure 6
<p>Schematic and conceptual timing diagram of proposed PS-GC.</p>
Figure 7
<p>Leakage compensation of PS-GC when storing data (<b>a</b>) “0” and (<b>b</b>) “1”.</p>
Figure 8
<p>Monte Carlo mismatch simulation of data retention after storing data “0” and “1”.</p>
Figure 9
<p>Post-layout simulated static current of PS-GC.</p>
Figure 10
<p>(<b>a</b>) PIM configuration example of PS-GC eDRAM. (<b>b</b>) RBL discharge plot of all accumulation results.</p>
Figure 11
<p>Overall architecture of the 4 kb eDRAM macro.</p>
Figure 12
<p>(<b>a</b>) Proposed 4 kb macro layout. (<b>b</b>) Layout comparison of bit cells of eDRAMs and SRAMs.</p>
Figure 13
<p>Post-layout simulated write-access times when storing data (<b>a</b>) “0” and (<b>b</b>) “1” across five process corners and four temperature cases.</p>
Figure 14
<p>Post-layout simulated access time versus supply voltage with typical (TT 25 °C), best (SF 85 °C) and worst (FS −25 °C) process corners and temperature conditions.</p>
Figure 15
<p>Post-layout simulated read-access times (<b>a</b>) depending on process corners and temperatures and (<b>b</b>) across the supply voltage range with typical (TT), best (FF) and worst (SS) process corners.</p>
Figure 16
<p>Shmoo plots of post-layout Monte Carlo 1000-trial simulations of the eDRAM with various operating frequencies (100–667 MHz), process corners (SF, TT and FS), temperatures (−25 °C to 85 °C) and supply voltages (0.5–1.2 V).</p>
20 pages, 4369 KiB  
Article
Pixel-Reasoning-Based Robotics Fine Grasping for Novel Objects with Deep EDINet Structure
by Chaoquan Shi, Chunxiao Miao, Xungao Zhong, Xunyu Zhong, Huosheng Hu and Qiang Liu
Sensors 2022, 22(11), 4283; https://doi.org/10.3390/s22114283 - 4 Jun 2022
Cited by 8 | Viewed by 2439
Abstract
Robotic grasp detection has mostly relied on the extraction of candidate grasping rectangles; such discrete sampling methods are time-consuming and may miss the potentially best grasp. This paper proposes a new pixel-level grasp detection method for RGB-D images. Firstly, a fine grasping representation [...] Read more.
Robotic grasp detection has mostly relied on the extraction of candidate grasping rectangles; such discrete sampling methods are time-consuming and may miss the potentially best grasp. This paper proposes a new pixel-level grasp detection method for RGB-D images. Firstly, a fine grasping representation is introduced to generate parallel-jaw gripper configurations, which effectively resolves gripper approach conflicts and improves applicability to unknown objects in cluttered scenarios. In addition, an adaptive grasping width is used to represent the grasping attribute, which is gentle on objects. Then, the encoder–decoder–inception convolutional neural network (EDINet) is proposed to predict the fine grasping configuration. EDINet uses encoder, decoder, and inception modules to improve the speed and robustness of pixel-level grasp detection. The proposed EDINet structure was evaluated on the Cornell and Jacquard datasets; our method achieves 98.9% and 96.1% test accuracy, respectively. Finally, we carried out grasping experiments on unknown objects, and the results show that the average success rate of our network model is 97.2% in single-object scenes and 93.7% in cluttered scenes, which outperforms state-of-the-art algorithms. In addition, EDINet completes a grasp detection pipeline within only 25 ms. Full article
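The pixel-level selection step, taking the pixel with the globally maximal predicted grasp quality and reading off its angle and width (as illustrated in Figure 4), can be sketched as follows; the map names and shapes are assumptions for illustration rather than EDINet's actual output tensors.

```python
import numpy as np

def select_grasp(quality, angle, width):
    """Return (row, col, angle, width) of the highest-quality pixel-level grasp
    given dense H x W prediction maps."""
    r, c = np.unravel_index(np.argmax(quality), quality.shape)
    return r, c, float(angle[r, c]), float(width[r, c])

# Hypothetical 224 x 224 prediction maps.
rng = np.random.default_rng(0)
q = rng.random((224, 224))
a = rng.uniform(-np.pi / 2, np.pi / 2, (224, 224))
w = rng.random((224, 224)) * 150.0   # opening width in pixels (assumed scale)
print(select_grasp(q, a, w))
```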
(This article belongs to the Section Sensors and Robotics)
Figure 1
<p>The representation of fine grasping in image and robotics workspace.</p>
Figure 2
<p>The overview of the robot grasping system.</p>
Figure 3
<p>The structure of EDINet: (<b>a</b>) encoder module, (<b>b</b>) decoder module, (<b>c</b>) inception module, (<b>d</b>) up-sampling module.</p>
Figure 4
<p>Pixel-level grasping. (<b>a</b>) Multiple grasping rectangles in multiple grasping regions, and the center of grasp rectangle is the local maximum. (<b>b</b>) The pixel point with the global maximal grasp quality is the center of the predicted grasping rectangle.</p>
Figure 5
<p>Grasping detection results on the Cornell dataset: (<b>a</b>) the evaluation results using RGB images, (<b>b</b>) the results using depth images, (<b>c</b>) the results using RGB-D image. The blue rectangle refers to the opening width when the gripper approaches the object, and the red “I” represents the closing width when the gripper picks up the object.</p>
Figure 6
<p>Grasping detection results on the Jacquard grasp dataset: (<b>a</b>) the results using RGB images, (<b>b</b>) the results using depth images, (<b>c</b>) the results using RGB-D images. The blue rectangle refers to the opening width when the gripper approaches the object, and the red “I” represents the closing width when the gripper picks up the object.</p>
Figure 7
<p>Robot close grasping test results. (<b>a</b>) The conventional grasping method directly closing; it easily broke the objects; (<b>b</b>) our grasping method with adaptive closing width, which is fine for objects. The blue rectangle refers to the opening width when the gripper approaches the object, and the red “I” represents the closing width when the gripper picks up the object.</p>
Figure 8
<p>Robot open grasping results. (<b>a</b>) Robot failed grasping by the conventional method due to colliding with other objects; (<b>b</b>) robot successful grasping by our method with adaptive opening gripper configurations.</p>
Figure 9
<p>Robot grasping experiment on unknown objects: (<b>a</b>) detection and grasping on rigid objects, (<b>b</b>) robot grasping thin and easy deformed objects, (<b>c</b>) robot grasping flexible objects. The blue rectangle refers to the opening width when the gripper approaches the object, and the red “I” represents the closing width when the gripper picks up the object.</p>
Figure 10
<p>Robot grasping in different cluttered scenarios: (<b>a</b>) objects detection, (<b>b</b>) adaptive gripper configurations and robot approaching objects, (<b>c</b>) robot grasping the object, (<b>d</b>) robot picking up the object.</p>
Figure 11
<p>Examples of failed grasping; the most common failed grasping is that the gripper is blocked by other objects.</p>
25 pages, 9896 KiB  
Article
Enhancing the Sense of Attention from an Assistance Mobile Robot by Improving Eye-Gaze Contact from Its Iconic Face Displayed on a Flat Screen
by Elena Rubies, Jordi Palacín and Eduard Clotet
Sensors 2022, 22(11), 4282; https://doi.org/10.3390/s22114282 - 4 Jun 2022
Cited by 12 | Viewed by 3370
Abstract
One direct way to express the sense of attention in a human interaction is through the gaze. This paper presents the enhancement of the sense of attention from the face of a human-sized mobile robot during an interaction. This mobile robot was designed [...] Read more.
One direct way to express the sense of attention in a human interaction is through the gaze. This paper presents the enhancement of the sense of attention conveyed by the face of a human-sized mobile robot during an interaction. This mobile robot was designed as an assistance mobile robot and uses a flat screen at the top of the robot to display an iconic (simplified) face with big round eyes and a single line as a mouth. Implementing eye-gaze contact with this iconic face is challenging because of the difficulty of simulating real 3D spherical eyes in a 2D image while accounting for the perspective of the person interacting with the mobile robot. The perception of eye-gaze contact has been improved by manually calibrating the gaze of the robot relative to the location of the face of the person interacting with the robot. The sense of attention has been further enhanced by implementing cyclic face explorations with saccades in the gaze and by performing blinking and small movements of the mouth. Full article
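A minimal sketch of the underlying idea, shifting each drawn pupil toward the detected face, is given below; the geometry is deliberately simplified (gaze angles mapped linearly onto the iris travel radius) and every constant is an illustrative assumption, whereas the paper relies on a manually calibrated mapping from face position to pupil placement.

```python
import math

def pupil_offset_px(face_x, face_y, face_z, eye_y, travel_px):
    """Horizontal/vertical pupil offsets (pixels) for one on-screen eye.

    face_x, face_y, face_z : face position relative to the eye [m]
                             (x forward, y lateral, z vertical).
    eye_y                  : lateral offset of this eye's center on the screen [m].
    travel_px              : maximum pupil travel radius in pixels.
    """
    yaw = math.atan2(face_y - eye_y, face_x)    # horizontal gaze angle
    pitch = math.atan2(face_z, face_x)          # vertical gaze angle
    dx = travel_px * yaw / (math.pi / 2)
    dy = travel_px * pitch / (math.pi / 2)
    return dx, dy

# Face 0.5 m in front, 0.05 m to the robot's left, at eye height; left eye drawn 0.07 m off-center.
print(pupil_offset_px(0.5, -0.05, 0.0, 0.07, 40))
```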
Figure 1
<p>Simplified geometric interpretation of the eyes looking at a fixation point, <math display="inline"><semantics> <mi>F</mi> </semantics></math>: (<b>a</b>) side view of the eye model; (<b>b</b>) representation of the plane of sight (<math display="inline"><semantics> <mrow> <mi>P</mi> <mi>l</mi> <mi>a</mi> <mi>n</mi> <mi>e</mi> <mo> </mo> <mi>D</mi> <mi>Y</mi> </mrow> </semantics></math>).</p>
Figure 2
<p>Image showing the assistance mobile robot used in this paper: (<b>a</b>) entire robot; (<b>b</b>) side-view detail of the screen used as a head, the coordinate system, the height of the eyes of the robot referred to the ground (<math display="inline"><semantics> <mrow> <msub> <mi>r</mi> <mi>H</mi> </msub> </mrow> </semantics></math>), and the inclination angle of the screen (<math display="inline"><semantics> <mi>α</mi> </semantics></math>).</p>
Figure 3
<p>Image and parameters that define the iconic face implemented in the assistance mobile robot.</p>
Figure 4
<p>Approximate representation of the field of view of the frontal upper cameras of the APR-02 mobile robot: <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>1</mn> </msub> </mrow> </semantics></math> is the RGB-D camera, and <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>2</mn> </msub> </mrow> </semantics></math> the panoramic RGB camera.</p>
Figure 5
<p>Figure representing two typical images provided simultaneously by (<b>a</b>) the upper frontal RGB-D camera (480 × 640 pixels); (<b>b</b>) the RGB panoramic camera of the APR-02 mobile robot (1280 × 1024 pixels). The image shows the mannequin face and two authors of this paper; the rectangles depict the faces detected in the images.</p>
Figure 6
<p>Figure showing (<b>a</b>) the representation of a typical depth image provided by the RGB-D camera (240 × 320 pixels); (<b>b</b>) the representation of the XYZC point cloud of the nearest face detected in the RGB image provided by the RGB-D camera (3233 data points). The XYZC point cloud has been analytically computed from the depth and RGB streams.</p>
Figure 7
<p>Image of the measurement setup showing the assistance mobile robot, the mannequin head and the camera used to take pictures of the robotic eye-gaze response.</p>
Figure 8
<p>Representation of the face square areas identified by the Viola–Jones algorithm [<a href="#B54-sensors-22-04282" class="html-bibr">54</a>] and representation of the average fixed proportions holistically proposed to locate the eyes and mouth in the cases of: (<b>a</b>) human-sized mannequin; (<b>b</b>) user 1; (<b>c</b>) user 1 masked; (<b>d</b>) user 2; (<b>e</b>) user 2 masked.</p>
Figure 9
<p>Holistic face proportions proposed in this paper to detect the eyes and mouth in a square image section classified as a face by the Viola–Jones algorithm [<a href="#B54-sensors-22-04282" class="html-bibr">54</a>]. The height of the sight plane of the face detected is labelled as <math display="inline"><semantics> <mrow> <msub> <mi>s</mi> <mi>H</mi> </msub> </mrow> </semantics></math>.</p>
Figure 10
<p>Representation of the horizontal location of the pupil of the eyes (<math display="inline"><semantics> <mrow> <msub> <mi>D</mi> <mi>L</mi> </msub> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>D</mi> <mi>R</mi> </msub> </mrow> </semantics></math>) that defines the short-range gaze when looking at a face placed at different distances (<math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>X</mi> </msub> <mo>,</mo> <msub> <mi>F</mi> <mi>Y</mi> </msub> </mrow> </semantics></math>) in the case of <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>Z</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>: (<b>a</b>) right eye gaze implementation; (<b>b</b>) left eye gaze implementation.</p>
Figure 11
<p>Representation of the vertical location of the pupil of both eyes (<math display="inline"><semantics> <mi>H</mi> </semantics></math>) that defines the short-range gaze when looking at a face placed at different distances (<math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>X</mi> </msub> <mo>,</mo> <msub> <mi>F</mi> <mi>Z</mi> </msub> </mrow> </semantics></math>) in the case of a face centered in front of the mobile robot (<math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>Y</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>).</p>
Figure 12
<p>Spline interpolated representation of the horizontal location of the pupil of the left and right eyes (<math display="inline"><semantics> <mrow> <msub> <mi>D</mi> <mi>L</mi> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>D</mi> <mi>R</mi> </msub> </mrow> </semantics></math>) that defines the long-range gaze when looking at a face placed at different horizontal angular orientations (<math display="inline"><semantics> <mi>φ</mi> </semantics></math>) in the case of <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>Z</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>φ</mi> </msub> <mo>=</mo> <mn>2.0</mn> </mrow> </semantics></math> m.</p>
Figure 13
<p>Spline interpolated representation of the vertical location of the pupil of both eyes (<math display="inline"><semantics> <mi>H</mi> </semantics></math>) that defines the long-range gaze when looking at a face placed at different vertical angular orientations (<math display="inline"><semantics> <mi>θ</mi> </semantics></math>) in the case of <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>Y</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>φ</mi> </msub> <mo>=</mo> <mn>2.0</mn> </mrow> </semantics></math> m.</p>
Figure 14
<p>Representation of the saccade trajectories based on the location of the face (red cross) and the fixation points of the left and right eyes and mouth deduced from the face area detected by the Viola–Jones algorithm [<a href="#B54-sensors-22-04282" class="html-bibr">54</a>]. The circular saccade sequence represented is 1-2-3-4-5, and the basic fixation time interval is 400 ms.</p>
Figure 15
<p>Gaze of the robot following a face performing a lateral displacement: (<b>a</b>) face at <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>Y</mi> </msub> </mrow> </semantics></math> = 0.00 m; (<b>b</b>) face at <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>Y</mi> </msub> </mrow> </semantics></math> = −0.05 m; (<b>c</b>) face at <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>Y</mi> </msub> </mrow> </semantics></math> = 0.05 m.</p>
Figure 16
<p>Gaze of the robot following a face at different heights: (<b>a</b>) face at <math display="inline"><semantics> <mrow> <msub> <mi>s</mi> <mi>H</mi> </msub> </mrow> </semantics></math> = 1.55 m; (<b>b</b>) face at <math display="inline"><semantics> <mrow> <msub> <mi>s</mi> <mi>H</mi> </msub> </mrow> </semantics></math> = 1.50 m; (<b>c</b>) face at <math display="inline"><semantics> <mrow> <msub> <mi>s</mi> <mi>H</mi> </msub> </mrow> </semantics></math> = 1.60 m.</p>
Figure 17
<p>Gaze of the robot looking at a face at different distances: (<b>a</b>) face at <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>X</mi> </msub> </mrow> </semantics></math> = 0.50 m; (<b>b</b>) face at <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>X</mi> </msub> </mrow> </semantics></math> = 0.45 m; (<b>c</b>) face at <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>X</mi> </msub> </mrow> </semantics></math> = 0.55 m.</p>
Figure 18
<p>Gaze of the robot at different stages of the cyclic fixation behavior when looking at a face: (<b>a</b>) on the left eye of the user; (<b>b</b>) on the right eye of the user; (<b>c</b>) on the mouth of the user.</p>
Figure 19
<p>Example of blinking: (<b>a</b>) normal gaze; (<b>b</b>) closed eyes; (<b>c</b>) half-closed eyes.</p>
Figure 20
<p>Example of the mouth variations: (<b>a</b>) neutral mouth expression; (<b>b</b>) attention variation; (<b>c</b>) smiling variation.</p>
17 pages, 7236 KiB  
Article
A Long Short-Term Memory Network for Plasma Diagnosis from Langmuir Probe Data
by Jin Wang, Wenzhu Ji, Qingfu Du, Zanyang Xing, Xinyao Xie and Qinghe Zhang
Sensors 2022, 22(11), 4281; https://doi.org/10.3390/s22114281 - 4 Jun 2022
Cited by 4 | Viewed by 2583
Abstract
Electrostatic probe diagnosis is the main method of plasma diagnosis. However, the traditional diagnosis theory is affected by many factors, and it is difficult to obtain accurate diagnosis results. In this study, a long short-term memory (LSTM) approach is used for plasma probe [...] Read more.
Electrostatic probe diagnosis is the main method of plasma diagnosis. However, the traditional diagnosis theory is affected by many factors, and it is difficult to obtain accurate diagnosis results. In this study, a long short-term memory (LSTM) approach is used for plasma probe diagnosis to derive electron density (Ne) and temperature (Te) more accurately and quickly. The LSTM network uses the data collected by Langmuir probes as input to eliminate the influence of the discharge device on the diagnosis, so it can be applied to a variety of discharge environments and even to ionospheric diagnosis in space. In a high-vacuum gas discharge environment, a Langmuir probe is used to obtain current–voltage (I–V) characteristic curves under different Ne and Te. Part of the data is selected to train the network, the other part is used as the test set, and the parameters are adjusted so that the network obtains better prediction results. Two metrics, namely mean squared error (MSE) and mean absolute percentage error (MAPE), are evaluated to quantify the prediction accuracy. The results show that using LSTM to diagnose plasma can reduce the impact of probe surface contamination that affects the traditional diagnosis methods and can accurately diagnose underdense plasma. In addition, compared with Te, the Ne diagnosis output by the LSTM is more accurate. Full article
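A minimal sketch of this kind of regressor, an LSTM mapping a sampled I–V sweep to normalized (Ne, Te), together with the two reported error metrics, is shown below; the sequence length, layer sizes, and normalization are assumptions and do not reproduce the network actually used in the study.

```python
import numpy as np
import tensorflow as tf

SEQ_LEN = 128  # assumed number of (bias voltage, probe current) samples per I-V sweep

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(SEQ_LEN, 2)),
    tf.keras.layers.Dense(2),          # outputs: normalized (Ne, Te)
])
model.compile(optimizer="adam", loss="mse")

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# Hypothetical usage on pre-normalized sweeps X (N, SEQ_LEN, 2) and targets Y (N, 2):
# model.fit(X_train, Y_train, epochs=50, validation_split=0.2)
# print(mse(Y_test, model.predict(X_test)), mape(Y_test, model.predict(X_test)))
```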
(This article belongs to the Topic Artificial Intelligence in Sensors)
Figure 1
<p>A representative I–V characteristic curve.</p>
Figure 2
<p>Comparison of I-V characteristic curves collected by contaminated probe and clean probe.</p>
Figure 3
<p>The I–V characteristic curve without obvious saturation region.</p>
Figure 4
<p>LSTM cell structure.</p>
Figure 5
<p>The full flowchart of the LSTM model.</p>
Figure 6
<p>Experimental setup.</p>
Figure 7
<p>Comparison results of different structures.</p>
Figure 8
<p>The electron density (<span class="html-italic">N<sub>e</sub></span>) and electron temperature (<span class="html-italic">T<sub>e</sub></span>) distribution of the data set.</p>
Figure 9
<p>The training and prediction results of the <span class="html-italic">N<sub>e</sub></span>. (<b>a</b>) The loss rate of the training set data; (<b>b</b>) The accuracy and loss of the verification set data; (<b>c</b>) Comparison of prediction results.</p>
Figure 10
<p>The training and prediction results of the <span class="html-italic">T<sub>e</sub></span>. (<b>a</b>) The loss rate of the training set data; (<b>b</b>) The accuracy and loss rate of the verification set data; (<b>c</b>) Comparison of prediction results.</p>
Figure 11
<p>(<b>a</b>) The comparison results of the <span class="html-italic">N<sub>e</sub></span>; (<b>b</b>) The comparison results of the <span class="html-italic">T<sub>e</sub></span>.</p>
24 pages, 891 KiB  
Article
Fault Tolerant DHT-Based Routing in MANET
by Saleem Zahid, Kifayat Ullah, Abdul Waheed, Sadia Basar, Mahdi Zareei and Rajesh Roshan Biswal
Sensors 2022, 22(11), 4280; https://doi.org/10.3390/s22114280 - 3 Jun 2022
Cited by 7 | Viewed by 2337
Abstract
In Distributed Hash Table (DHT)-based Mobile Ad Hoc Networks (MANETs), a logical structured network (i.e., one following a tree, ring, chord, 3D, or similar structure) is built over the ad hoc physical topology in a distributed manner. The logical structures guide routing processes and eliminate [...] Read more.
In Distributed Hash Table (DHT)-based Mobile Ad Hoc Networks (MANETs), a logical structured network (i.e., one following a tree, ring, chord, 3D, or similar structure) is built over the ad hoc physical topology in a distributed manner. The logical structures guide routing processes and eliminate flooding at the control and data planes, thus making the system scalable. However, limited radio range, mobility, and lack of infrastructure introduce frequent and unpredictable changes to the network topology, i.e., connectivity/disconnectivity, node/link failures, network partitioning, and frequent merging. Moreover, every single change in the physical topology has an associated impact on the logical structured network and results in unevenly distributed and disrupted logical structures. This can completely halt communication in the logical network: even physically connected nodes may no longer be reachable because of the disrupted logical structure and the unavailability of the index information maintained at anchor nodes (ANs) in DHT networks. Therefore, distributed solutions are needed to tolerate faults in the logical network and provide end-to-end connectivity in such an adversarial environment. This paper defines the scope of the problem in the context of DHT networks and contributes a Fault-Tolerant DHT-based routing protocol (FTDN). FTDN, using a cross-layer design approach, investigates network dynamics in the physical network and adaptively makes arrangements to tolerate faults in the logically structured DHT network. In particular, FTDN ensures network availability (i.e., maintains connected and evenly distributed logical structures and ensures access to index information) in the face of failures and significantly improves performance. Analysis and simulation results show the effectiveness of the proposed solutions. Full article
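To make the roles of logical identifiers and anchor nodes concrete, the sketch below hashes node names onto a ring and resolves the anchor (successor) responsible for a destination's index entry; this is generic consistent hashing used purely for illustration, not FTDN's actual logical structure or its fault-tolerance mechanisms.

```python
import hashlib
from bisect import bisect_left

RING_BITS = 32

def logical_id(name):
    """Map a node or destination name onto the logical identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << RING_BITS)

def anchor_for(dest, nodes):
    """Return the anchor node: the first node clockwise from the destination's logical id."""
    ring = sorted((logical_id(n), n) for n in nodes)
    ids = [i for i, _ in ring]
    idx = bisect_left(ids, logical_id(dest)) % len(ring)
    return ring[idx][1]

# Hypothetical MANET: node "n7" publishes its index entry at the anchor returned below.
print(anchor_for("n7", ["n1", "n2", "n3", "n4", "n5"]))
```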
(This article belongs to the Section Sensor Networks)
Figure 1
<p>Address publication, lookup, and routing in DHT networks.</p>
Figure 2
<p>The Address tree.</p>
Figure 3
<p>k-hop critical node/link scenario.</p>
Figure 4
<p>Physical vs. logical network.</p>
Figure 5
<p>Lookup success ratio as a function of network size and speed.</p>
Figure 6
<p>Lookup success ratio results, as boxplots, against different node moving speeds with varying network sizes.</p>
Figure 7
<p>Average E2E lookup delay as a function of network size and speed.</p>
Figure 8
<p>E2E delay results, as boxplots, against different node moving speeds with varying network sizes.</p>
Figure 9
<p>Normalized overhead as a function of network size and speed.</p>
Figure 10
<p>Normalized overhead results, as boxplots, against different node moving speeds with varying network sizes.</p>