Sensors, Volume 19, Issue 1 (January-1 2019) – 217 articles

Cover Story (view full-size image): An electrical penetration assembly (EPA), which provides a pressure barrier for the containment structure in nuclear power plants, is a mismatched metal-to-glass sealing structure. An important factor in maintaining the hermeticity of an EPA is the residual stress in the sealing glass, which is generated during the EPA sealing process. A novel method to investigate and optimize the sealing process of the EPA, based on a fiber Bragg grating (FBG) sensor, is proposed in this paper. The temperature change during the heating process was measured via the Bragg wavelength shift. After the sealing glass solidified and bonded well with the FBG, the residual stress was determined by the sensor. Based on the temperature and residual stress measurements, the FBG was found to be feasible for use in the manufacturing process of equipment used in nuclear power plants. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
17 pages, 2970 KiB  
Article
Development of a LeNet-5 Gas Identification CNN Structure for Electronic Noses
by Guangfen Wei, Gang Li, Jie Zhao and Aixiang He
Sensors 2019, 19(1), 217; https://doi.org/10.3390/s19010217 - 8 Jan 2019
Cited by 129 | Viewed by 15667
Abstract
A new LeNet-5 gas identification convolutional neural network (CNN) structure for electronic noses is proposed and developed in this paper. Inspired by the tremendous achievements of convolutional neural networks in the field of computer vision, the LeNet-5 was adopted and improved for a 12-sensor-array-based electronic nose system. Response data of the electronic nose to different concentrations of CO, CH4 and their mixtures were acquired by an automated gas distribution and test system. By adjusting the parameters of the CNN structure, the gas LeNet-5 was improved to recognize the three categories of CO, CH4 and their mixtures regardless of concentration. With previously unused data as the test set, the improved gas LeNet-5 reached a final gas identification accuracy of 98.67%. Comparison with the results of Multilayer Perceptron neural networks and a Probabilistic Neural Network verifies the improved recognition rate at a comparable time cost, which demonstrates the effectiveness of the proposed approach. Full article
(This article belongs to the Section Intelligent Sensors)
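The abstract does not reproduce the network itself, so the following is a minimal, illustrative sketch of a LeNet-5-style classifier for 12 × 12 electronic-nose response matrices with three output classes (CO, CH4, mixture), written with the Keras API. The layer sizes, kernel counts and training settings are placeholder assumptions, not the values tuned in the paper.

```python
# Illustrative sketch only: a LeNet-5-style classifier for 12x12 e-nose
# response matrices with three output classes (CO, CH4, mixture).
# Layer counts and kernel sizes are assumptions, not the paper's tuned values.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_gas_lenet(input_shape=(12, 12, 1), num_classes=3):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(6, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-in data: 200 samples of 12x12 sensor-response "images".
    x = np.random.rand(200, 12, 12, 1).astype("float32")
    y = np.random.randint(0, 3, size=200)
    model = build_gas_lenet()
    model.fit(x, y, epochs=2, batch_size=16, verbose=0)
    print(model.predict(x[:1]))  # probabilities for the three gas classes
```

In practice the 12 × 12 input would be assembled from the sampled responses of the 12-sensor array, and the hyperparameters would be adjusted as the authors describe.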
Figures:
Figure 1: The frame diagram of the EN.
Figure 2: (a) The improved system of the EN and the automated test system; (b) the measurement circuit.
Figure 3: (a) Response curves of TGS2603 to CO at 50 ppm concentration at different injection times; (b) response of 12 sensors to CH4 at four concentrations; (c) response of 12 sensors to CO at four concentrations; (d) response of 12 sensors to gas mixtures (50 ppm CO + 500–2000 ppm CH4).
Figure 4: The LeNet-5 structure proposed by Yann LeCun [13].
Figure 5: Activation functions.
Figure 6: The classification process of softmax.
Figure 7: (a) Patterns of CH4 data matrices with size 12 × 12 (500, 1000, 1500, 2000 ppm); (b) patterns of CO data matrices with size 12 × 12 (50, 100, 150, 200 ppm); (c) patterns of mixture data matrices with size 12 × 12 (500 ppm CH4 + 50 ppm CO, 500 ppm CH4 + 100 ppm CO, 500 ppm CH4 + 150 ppm CO).
Figure 8: Training and validation curves of LeNet-5 with different numbers of convolution kernels.
Figure 9: Training and validation curves of LeNet-5 with different sizes of convolution kernels.
Figure 10: Training and validation curves of LeNet-5 with different sizes of inputs.
Figure 11: Improved LeNet-5 structure for ENs.
Figure 12: The structure of the MLP.
27 pages, 4744 KiB  
Article
Implicit Calibration Using Probable Fixation Targets
by Pawel Kasprowski, Katarzyna Harȩżlak and Przemysław Skurowski
Sensors 2019, 19(1), 216; https://doi.org/10.3390/s19010216 - 8 Jan 2019
Cited by 9 | Viewed by 4915
Abstract
Proper calibration of the eye movement signal registered by an eye tracker seems to be one of the main challenges in popularizing eye trackers as yet another user-input device. Classic calibration methods, which take time and impose unnatural behavior on the eyes, must be replaced by intelligent methods that are able to calibrate the signal without conscious cooperation by the user. Such an implicit calibration requires some knowledge about the stimulus a user is looking at and takes this information into account to predict probable gaze targets. This paper describes a possible method to perform implicit calibration: it starts with finding probable fixation targets (PFTs), then uses these targets to build a mapping to the probable gaze path. Various algorithms that may be used for finding PFTs and mappings are presented in the paper, and errors are calculated using two datasets registered with two different types of eye trackers. The results show that although, for now, the implicit calibration provides results worse than the classic one, it may be comparable with it and sufficient for some applications. Full article
(This article belongs to the Section Physical Sensors)
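To make the idea of implicit calibration concrete, the sketch below alternates between assigning each fixation to its nearest probable fixation target under the current mapping and refitting an affine eye-to-screen mapping by least squares. The affine model, the alternating scheme and the synthetic data are illustrative assumptions and do not reproduce the PFT algorithms evaluated in the paper.

```python
# Sketch of implicit calibration by alternating target assignment and
# least-squares refitting of an affine eye-to-screen mapping.
# The affine model and the synthetic data are illustrative assumptions.
import numpy as np

def fit_affine(eye, gaze):
    """Least-squares affine map: gaze ~ [eye, 1] @ A, with A of shape (3, 2)."""
    X = np.hstack([eye, np.ones((len(eye), 1))])
    A, *_ = np.linalg.lstsq(X, gaze, rcond=None)
    return A

def apply_affine(eye, A):
    return np.hstack([eye, np.ones((len(eye), 1))]) @ A

def implicit_calibration(eye_points, pft_per_fixation, n_iter=10):
    """eye_points: (N, 2) raw eye coords; pft_per_fixation: list of (K_i, 2) targets."""
    A = fit_affine(eye_points, eye_points)          # start from an identity-like map
    for _ in range(n_iter):
        mapped = apply_affine(eye_points, A)
        # Assign each fixation to its nearest candidate target under the current map.
        targets = np.array([p[np.argmin(np.linalg.norm(p - m, axis=1))]
                            for m, p in zip(mapped, pft_per_fixation)])
        A = fit_affine(eye_points, targets)         # refit mapping to assigned targets
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_targets = rng.uniform(0, 1, size=(50, 2))                      # gaze points on screen
    eye = 0.85 * true_targets + 0.05 + rng.normal(0, 0.01, (50, 2))     # uncalibrated signal
    pfts = [np.vstack([t, rng.uniform(0, 1, size=(3, 2))]) for t in true_targets]
    A = implicit_calibration(eye, pfts)
    err = np.linalg.norm(apply_affine(eye, A) - true_targets, axis=1).mean()
    print(f"mean mapping error: {err:.3f}")
```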
Figures:
Figure 1: The image with two probable fixation targets (PFTs). It may be assumed that while looking at the image, most of the time people will fixate on one of the regions inside the yellow squares.
Figure 2: Data flow schema of the implicit calibration algorithm taking PFTs into account.
Figure 3: Fixations with targets. The figure shows subsequent scenes/screens with several targets (PFTs) on each. The horizontal axis represents time. The path visualizes one possible mapping.
Figure 4: Exemplary image with corresponding saliency maps obtained with different methods. The targets (PFTs) are presented as cyan dots with labels on each image.
Figure 5: The participant was looking at T_a^(1) during fixation a and the eye center was at the point shown in the eye_a image. Having fixation b with the eye center at the point shown in the eye_b image, it may be assumed that the correct gaze target should be T_b^(3), because it results in the lowest angle between the vector E_ab flipped horizontally (Equation (12)) and the vector T_ab (Equation (13)).
Figure 6: Eye trackers used during the experimental part of the research. Left: Pupil Labs headset eye tracker; right: The Eye Tribe remote eye tracker, to be mounted below a screen.
Figure 7: A pipeline of image processing converting scene and eye frames to a list of fixations with eye and target coordinates.
Figure 8: A frame from the movie presented during the third experiment.
Figure 9: Error for the remote eye tracker with MCF_DIR and the incremental algorithm, depending on the number of fixations. Left: horizontal error; right: vertical error. The gray area represents the standard deviation.
Figure 10: Error for the headset eye tracker and the MCF_DIR incremental algorithm, depending on the number of fixations. Left: horizontal error; right: vertical error. The gray area represents the standard deviation.
20 pages, 11378 KiB  
Article
Integration of a Mobile Node into a Hybrid Wireless Sensor Network for Urban Environments
by Carlos Alberto Socarrás Bertiz, Juan Jesús Fernández Lozano, Jose Antonio Gomez-Ruiz and Alfonso García-Cerezo
Sensors 2019, 19(1), 215; https://doi.org/10.3390/s19010215 - 8 Jan 2019
Cited by 8 | Viewed by 6432
Abstract
Robots, or in general, intelligent vehicles, require large amounts of data to adapt their behavior to the environment and achieve their goals. When their missions take place in large areas, using information additional to that gathered by the onboard sensors frequently offers a more efficient solution to the problem. The emergence of Cyber-Physical Systems and Cloud computing allows this approach, but the integration of sensory information, and its effective availability to the robots or vehicles, is challenging. This paper addresses the development and implementation of a modular mobile node of a Wireless Sensor Network (WSN), designed to be mounted onboard vehicles and capable of using different sensors according to mission needs. The mobile node is integrated with an existing static network, transforming it into a Hybrid Wireless Sensor Network (H-WSN) and adding flexibility and range to it. The integration is achieved without the need for multi-hop routing. A database holds the data acquired by both mobile and static nodes, allowing access to the gathered information in real time. A Human–Machine Interface (HMI) presents this information to users. Finally, the system is tested in real urban scenarios in a use case of measuring gas levels. Full article
(This article belongs to the Special Issue Design and Implementation of Future CPS)
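The paper defines an ASCII frame for the mobile node and a database for the gathered readings; the exact format is not quoted here, so the sketch below parses a purely hypothetical semicolon-separated frame (node id, GPS position, then name=value sensor readings) and stores it in a local SQLite table.

```python
# Sketch: parse a hypothetical ASCII sensor frame and store it in SQLite.
# The frame layout "NODE;lat;lon;name=value;..." is an assumption made for
# illustration; the paper's real frame structure differs.
import sqlite3

def parse_frame(frame: str):
    node, lat, lon, *readings = frame.strip().split(";")
    values = {}
    for item in readings:
        name, value = item.split("=")
        values[name] = float(value)
    return node, float(lat), float(lon), values

def store(conn, frame: str):
    node, lat, lon, values = parse_frame(frame)
    with conn:
        for name, value in values.items():
            conn.execute(
                "INSERT INTO measurements (node, lat, lon, sensor, value) "
                "VALUES (?, ?, ?, ?, ?)",
                (node, lat, lon, name, value),
            )

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE measurements "
                 "(node TEXT, lat REAL, lon REAL, sensor TEXT, value REAL)")
    store(conn, "MOBILE01;36.7213;-4.4214;NH3=0.82;CO=1.4;NO2=0.05")
    for row in conn.execute("SELECT * FROM measurements"):
        print(row)
```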
Figures:
Figure 1: System architecture of the UIS static Wireless Sensor Network.
Figure 2: Different types of nodes developed for the Urban Information System.
Figure 3: Architecture of the mobile node.
Figure 4: ASCII frame structure employed in the mobile node.
Figure 5: Simplified protocol message flow diagram for the local mode.
Figure 6: Part of an enhanced frame.
Figure 7: Simplified protocol message flow diagram for the networked mode.
Figure 8: Frame in networked mode with gas sensor data: (a) frame with data from two sensors; (b) frame with data from three sensors.
Figure 9: Integration of the mobile node to create a Hybrid Wireless Sensor Network.
Figure 10: Human–Machine Interface for the Urban Information System.
Figure 11: Vehicle used in experiments, showing the top case with the UIS mobile node installed.
Figure 12: Area of the city center covered by experiments in local mode.
Figure 13: Area of the city of Malaga covered by experiments in networked mode.
Figure 14: Measurements for the NH3 sensor obtained in a networked-mode experiment, as shown by the user interface.
14 pages, 3130 KiB  
Article
Decoupling of Airborne Dynamic Bending Deformation Angle and Its Application in the High-Accuracy Transfer Alignment Process
by Ping Yang, Xiyuan Chen and Junwei Wang
Sensors 2019, 19(1), 214; https://doi.org/10.3390/s19010214 - 8 Jan 2019
Cited by 5 | Viewed by 4243
Abstract
In the traditional airborne distributed position and orientation system (DPOS) transfer alignment process, the coupling angle between the dynamic deformation and the body angular motion is not estimated or compensated, which gives the process low precision and a long convergence time. To achieve high-precision transfer alignment, a decoupling method for the airborne dynamic deformation angle is proposed in this paper. The model of the coupling angle is established through mathematical derivation. Then, taking the coupling angle into consideration, the angular velocity error and velocity error between the master INS and the slave IMU are corrected. Based on this, a novel 27-state Kalman filter model is established. Simulation results demonstrate that, compared with the traditional transfer alignment model, the model proposed in this paper converges faster and achieves higher accuracy. Full article
(This article belongs to the Special Issue Aerospace Sensors and Multisensor Systems)
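For readers unfamiliar with the filtering machinery behind transfer alignment, the sketch below shows a generic linear Kalman filter predict/update cycle in NumPy. The toy two-state constant-velocity model is an assumption chosen for illustration and is unrelated to the paper's 27-state transfer-alignment model.

```python
# Generic linear Kalman filter predict/update step (illustrative only;
# the paper's 27-state transfer-alignment model is not reproduced here).
import numpy as np

def kf_predict(x, P, F, Q):
    x = F @ x                      # propagate state estimate
    P = F @ P @ F.T + Q            # propagate estimate covariance
    return x, P

def kf_update(x, P, z, H, R):
    y = z - H @ x                              # measurement residual
    S = H @ P @ H.T + R                        # residual covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    dt = 0.01
    F = np.array([[1.0, dt], [0.0, 1.0]])      # toy constant-velocity model
    H = np.array([[1.0, 0.0]])                 # position-only measurement
    Q = 1e-5 * np.eye(2)
    R = np.array([[1e-2]])
    x, P = np.zeros(2), np.eye(2)
    for k in range(100):
        x, P = kf_predict(x, P, F, Q)
        z = np.array([0.5 * k * dt])           # synthetic measurement
        x, P = kf_update(x, P, z, H, R)
    print("final state estimate:", x)
```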
Figures:
Figure 1: Sensor location.
Figure 2: The relationship between ω_im^m, ω_is^s, ω_θ and ω_is^s′.
Figure 3: The x-axis coupling error angle.
Figure 4: The angular velocity error between ω_im^m and ω_is^s′.
Figure 5: The relative positional relationship between R_m, R_m and r.
Figure 6: The flight trajectory.
Figure 7: The attitude of the master INS.
Figure 8: The estimation errors of the coupling angles.
Figure 9: The estimation errors of the coupling angles. (a) The x-axis dynamic lever arm error; (b) the y-axis dynamic lever arm error; (c) the z-axis dynamic lever arm error.
Figure 10: The estimation errors of attitude. (a) The x-axis attitude error; (b) the y-axis attitude error; (c) the z-axis attitude error.
17 pages, 6554 KiB  
Article
Extended Multiple Aperture Mapdrift-Based Doppler Parameter Estimation and Compensation for Very-High-Squint Airborne SAR Imaging
by Zhichao Zhou, Yinghe Li, Yan Wang, Linghao Li and Tao Zeng
Sensors 2019, 19(1), 213; https://doi.org/10.3390/s19010213 - 8 Jan 2019
Cited by 2 | Viewed by 4263
Abstract
Doppler parameter estimation and compensation (DPEC) is an important technique for airborne SAR imaging due to the unpredictable disturbance of the real aircraft trajectory. Traditional DPEC methods can only be applied to broadside, small- or medium-squint geometries, as they at most consider the spatial variance of the second-order Doppler phase. To implement DPEC in very-high-squint geometries, we propose an extended multiple aperture mapdrift (EMAM) method in this paper for better accuracy. This advantage is achieved by further estimating and compensating the spatial variation of the third-order Doppler phase, i.e., the derivative of the Doppler rate. The main procedures of the EMAM, including the steps of sub-view image generation, sliding-window-based cross-correlation, and image-offset-based Doppler parameter estimation, are derived in detail, followed by analyses of the EMAM performance. The presented approach is evaluated by both computer simulations and real airborne data. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Techniques and Applications)
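As a rough illustration of the map-drift principle on which MAM-type estimators are built (a residual Doppler-rate error shifts sub-aperture looks relative to one another), the sketch below estimates the relative offset of two synthetic 1-D azimuth profiles by FFT-based cross-correlation. It is not the EMAM algorithm: the sliding-window, spatially-variant estimation of the Doppler rate and its derivative is not reproduced.

```python
# Sketch of the map-drift idea: estimate the relative shift between two
# sub-aperture looks via circular cross-correlation.  Synthetic 1-D profiles;
# the EMAM sliding-window, spatially-variant estimation is not reproduced.
import numpy as np

def estimate_shift(a, b):
    """Return the circular shift k such that a is approximately np.roll(b, k)."""
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    lag = int(np.argmax(corr))
    n = len(a)
    return lag if lag <= n // 2 else lag - n   # map to a signed shift

if __name__ == "__main__":
    n = 512
    x = np.arange(n)
    profile = np.exp(-0.5 * ((x - 200.0) / 8.0) ** 2)        # a bright scatterer
    look1 = profile + 0.01 * np.random.randn(n)
    look2 = np.roll(profile, 7) + 0.01 * np.random.randn(n)  # drifted look
    shift = estimate_shift(look2, look1)
    print("estimated image offset (samples):", shift)
    # In map-drift autofocus this offset is proportional to the residual
    # Doppler-rate error and is used to correct the azimuth phase history.
```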
Figures:
Figure 1: Illustration of the spatially-variant Doppler histories of targets at different positions for very-high-squint (VHS) airborne SAR imaging. T_s is the aircraft motion time. t_start and t_end are the start and end times, respectively. B_a is the Doppler bandwidth. f_dc is the Doppler centroid.
Figure 2: Illustrations of the basic MAM method and the EMAM method. (a) The basic MAM method; (b) the EMAM method. Δf_d,ij is the position offset between the sub-i image and the sub-j image.
Figure 3: The flow charts of the basic MAM method and the EMAM method. (a) The basic MAM method; (b) the EMAM method.
Figure 4: The flowchart of the azimuth compression combined with the EMAM method in the high-squint airborne SAR imaging algorithm.
Figure 5: Typical VHS airborne SAR geometry.
Figure 6: The spatial variance of the Doppler parameters. (a) The Doppler rate; (b) the derivative of the Doppler rate.
Figure 7: The spatially-variant phase errors caused by the spatially-variant Doppler parameters. (a) The phase error caused by the Doppler rate; (b) the phase error caused by the derivative of the Doppler rate.
Figure 8: The computational complexity of the different methods.
Figure 9: The two-dimensional imaging results of targets by the different methods with the velocity and acceleration errors. (a) No Doppler parameter estimation; (b) the basic MAM method; (c) the IMAM method; (d) the EMAM method.
Figure 10: The azimuth impulse responses of targets by the different methods with the velocity and acceleration errors. (a) No Doppler parameter estimation; (b) the basic MAM method; (c) the IMAM method; (d) the EMAM method.
Figure 11: The images of real airborne data based on the different methods. (a) The inertial navigation information; (b) the basic MAM method; (c) the IMAM method; (d) the EMAM method.
Figure 12: The azimuth impulse responses of the chosen strong scatterer based on the IMAM method and the EMAM method.
Figure 13: The estimation curves of the spatially-variant Doppler parameters based on the different methods. (a) The Doppler rate; (b) the derivative of the Doppler rate.
Figure 14: The residual spatial variance of the Doppler parameters after compensation based on the estimation results of the EMAM method.
93 pages, 526 KiB  
Editorial
Acknowledgement to Reviewers of Sensors in 2018
by Sensors Editorial Office
Sensors 2019, 19(1), 212; https://doi.org/10.3390/s19010212 - 8 Jan 2019
Viewed by 13102
Abstract
Rigorous peer-review is the cornerstone of high-quality academic publishing [...] Full article
14 pages, 4648 KiB  
Article
Synthesis of Cu2O/CuO Nanocrystals and Their Application to H2S Sensing
by Kazuki Mikami, Yuta Kido, Yuji Akaishi, Armando Quitain and Tetsuya Kida
Sensors 2019, 19(1), 211; https://doi.org/10.3390/s19010211 - 8 Jan 2019
Cited by 68 | Viewed by 10277
Abstract
Semiconducting metal oxide nanocrystals are an important class of materials that have versatile applications because of their useful properties and high stability. Here, we developed a simple route to synthesize nanocrystals (NCs) of copper oxides such as Cu2O and CuO using a hot-soap method, and applied them to H2S sensing. Cu2O NCs were synthesized by simply heating a copper precursor in oleylamine in the presence of diol at 160 °C under an Ar flow. X-ray diffractometry (XRD), dynamic light scattering (DLS), and transmission electron microscopy (TEM) results indicated the formation of monodispersed Cu2O NCs with a crystallite size of approximately 5 nm and a colloidal size of approximately 12 nm. The conversion of the Cu2O NCs to CuO NCs was achieved by straightforward air oxidation at room temperature, as confirmed by XRD and UV-vis analyses. A thin-film Cu2O NC sensor fabricated by spin coating showed responses to H2S at dilute concentrations (1–8 ppm) at 50–150 °C, but its stability was poor because of the formation of metallic Cu2S in an H2S atmosphere. We found that Pd loading improved the stability of the sensor response. The Pd-loaded Cu2O NC sensor exhibited reproducible responses to H2S at 200 °C. Based on the gas sensing mechanism, it is suggested that Pd loading facilitates the reaction of adsorbed oxygen with H2S and suppresses the irreversible formation of Cu2S. Full article
(This article belongs to the Special Issue Functional Materials for the Applications of Advanced Gas Sensors)
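Gas-sensor studies of this kind usually summarize performance as a resistance ratio and a response time; the sketch below computes such figures from a resistance-versus-time trace. The Ra/Rg definition, the 90% criterion and the synthetic trace are assumptions for illustration and are not taken from this paper.

```python
# Sketch: compute a gas-sensor response ratio and 90% response time from a
# resistance-versus-time trace.  The Ra/Rg convention and the synthetic data
# are illustrative assumptions, not this paper's reported procedure.
import numpy as np

def sensor_response(t, r, gas_on, gas_off):
    """Response = R_air / R_gas, with R_gas the minimum resistance during exposure."""
    r_air = np.mean(r[t < gas_on])                       # baseline in air
    during = (t >= gas_on) & (t <= gas_off)
    r_gas = np.min(r[during])
    response = r_air / r_gas
    # 90% response time: time to cover 90% of the resistance change after gas_on.
    target = r_air - 0.9 * (r_air - r_gas)
    idx = np.where(during & (r <= target))[0]
    t90 = t[idx[0]] - gas_on if idx.size else np.nan
    return response, t90

if __name__ == "__main__":
    t = np.linspace(0, 600, 1201)                         # seconds
    r = 1e6 * np.ones_like(t)                             # 1 MOhm baseline
    during = (t >= 120) & (t <= 420)
    r[during] = 1e6 - 8e5 * (1 - np.exp(-(t[during] - 120) / 40.0))
    resp, t90 = sensor_response(t, r, gas_on=120, gas_off=420)
    print(f"response (Ra/Rg) = {resp:.2f}, t90 = {t90:.0f} s")
```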
Figures:
Figure 1: A photo of the sensor device using Cu2O NCs deposited on an alumina substrate with Au microelectrodes.
Figure 2: XRD patterns of (a) Cu2O (JCPDS file no. 05-0667) and (b) CuO NCs (JCPDS file no. 45-0937). CuO NCs were obtained by air oxidation of Cu2O NCs at room temperature.
Figure 3: Particle size distributions dispersed in toluene and TEM images of (a,c) Cu2O and (b,d) CuO NCs.
Figure 4: UV-vis absorption spectra of (a) Cu2O NCs and (b) CuO NCs in toluene. Insets show the corresponding Tauc plots.
Figure 5: (a) XRD pattern, (b) particle size distribution, and (c) TEM image of Pd NCs.
Figure 6: XRD patterns of Cu2O NCs heated at (a) 50, (b) 100, (c) 150, (d) 200, (e) 250, and (f) 300 °C for 30 min in air. Cu2O: JCPDS file no. 05-0667; CuO: JCPDS file no. 45-0937.
Figure 7: FT-IR spectra of (a) Cu2O and (b) CuO NCs heated at 200 and 250 °C, respectively, in air for different times.
Figure 8: Response transients to 5 ppm H2S of the Cu2O NC sensor at different temperatures.
Figure 9: Response transients to 8 ppm H2S in air for Pd-loaded (1, 5, 10 mol%) Cu2O NC sensors at different temperatures. (a) 50, (b) 100, (c) 150, (d) 200 °C.
Figure 10: Dependence of sensor response to 8 ppm H2S in air on Pd loading amount for the Pd-loaded Cu2O NC sensors operated at different temperatures.
Figure 11: Response transients of the Pd (1, 5, 10 mol%)-loaded CuO NC sensors in response to 1–8 ppm H2S at 250 °C.
Figure 12: Dependence of sensor response on H2S concentration for the Pd (1, 5, 10 mol%)-loaded CuO NC sensors operated at 250 °C.
17 pages, 2931 KiB  
Article
Validating Deep Neural Networks for Online Decoding of Motor Imagery Movements from EEG Signals
by Zied Tayeb, Juri Fedjaev, Nejla Ghaboosi, Christoph Richter, Lukas Everding, Xingwei Qu, Yingyu Wu, Gordon Cheng and Jörg Conradt
Sensors 2019, 19(1), 210; https://doi.org/10.3390/s19010210 - 8 Jan 2019
Cited by 152 | Viewed by 16334
Abstract
Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) on motor imagery movements translate the subject's motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network model (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG data collected from 20 subjects and on an existing dataset known as the 2b EEG dataset from the "BCI Competition IV". Overall, better classification performance was achieved with the deep learning models compared to state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI. Full article
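As a minimal counterpart to the LSTM model listed above (not the authors' architecture or hyperparameters), the sketch below defines a small Keras LSTM that takes raw EEG windows of shape (timesteps, channels) and outputs two motor-imagery class probabilities. Window shape, layer sizes and the random stand-in data are assumptions.

```python
# Illustrative sketch: a small LSTM classifier for raw EEG windows
# (timesteps x channels) with two motor-imagery classes (left/right).
# Layer sizes and window shape are assumptions, not the paper's model.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_eeg_lstm(timesteps=500, channels=3, num_classes=2):
    model = models.Sequential([
        layers.Input(shape=(timesteps, channels)),
        layers.LSTM(64, return_sequences=False),   # temporal feature extraction
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-in data: 100 trials, 500 samples per trial, 3 electrodes.
    x = np.random.randn(100, 500, 3).astype("float32")
    y = np.random.randint(0, 2, size=100)
    model = build_eeg_lstm()
    model.fit(x, y, epochs=1, batch_size=10, verbose=0)
    print(model.predict(x[:1]))   # P(left), P(right)
```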
Figures:
Figure 1: Experimental setup and an example of a recording session of motor imagery electroencephalography (MI-EEG).
Figure 2: Example of the spectrograms generated from the C3 and C4 electrodes during left (class 1) and right (class 2) hand movement imagination.
Figure 3: The pragmatic convolutional neural network (pCNN) model's architecture, where E is the number of electrodes, T is the number of timesteps, and K is the number of classes.
Figure 4: Training and validation loss of the pCNN model. The blue and green lines represent the average of the 5 folds for training and validation, respectively.
Figure 5: MI classification accuracies from 20 subjects using (a) traditional machine learning approaches and (b) different neural classifiers. The polar bar plot shows the accuracy range (mean ± standard deviation) achieved by the 5 models for each of the 20 subjects. The lower panel subsumes, for each algorithm, the 20 mean accuracies achieved; black bars indicate the median result.
Figure 6: MI classification accuracies from nine subjects using five different classifiers. The polar bar plot shows the accuracy range (mean ± standard deviation) achieved by the five models for each of the nine subjects. The lower panel subsumes, for each algorithm, the nine mean accuracies achieved; black bars indicate the median result.
Figure 7: A frame of a live stream. Top: Filtered signal during a trial. Blue and red traces illustrate channel 1 and channel 2, respectively. Vertical lines indicate visual (orange) and acoustic (red) cues. Bottom: Spectrograms generated from data within the grey rectangle shown above.
Figure 8: Live setup for real-time EEG signal decoding and Katana robot arm control. P(L) and P(R) represent the probability of left and right hand movements, respectively.
20 pages, 12173 KiB  
Article
City Scale Particulate Matter Monitoring Using LoRaWAN Based Air Quality IoT Devices
by Steven J. Johnston, Philip J. Basford, Florentin M. J. Bulot, Mihaela Apetroaie-Cristea, Natasha H. C. Easton, Charlie Davenport, Gavin L. Foster, Matthew Loxham, Andrew K. R. Morris and Simon J. Cox
Sensors 2019, 19(1), 209; https://doi.org/10.3390/s19010209 - 8 Jan 2019
Cited by 93 | Viewed by 15448
Abstract
Air Quality (AQ) is a very topical issue for many cities and has a direct impact on citizen health. The AQ of a large UK city is being investigated using low-cost Particulate Matter (PM) sensors, and the results obtained by these sensors have been compared with government operated AQ stations. In the first pilot deployment, six AQ Internet of Things (IoT) devices have been designed and built, each with four different low-cost PM sensors, and they have been deployed at two locations within the city. These devices are equipped with LoRaWAN wireless network transceivers to test city scale Low-Power Wide Area Network (LPWAN) coverage. The study concludes that (i) the physical device developed can operate at a city scale; (ii) some low-cost PM sensors are viable for monitoring AQ and for detecting PM trends; (iii) LoRaWAN is suitable for city scale sensor coverage where connectivity is an issue. Based on the findings from this first pilot project, a larger LoRaWAN enabled AQ sensor network is being deployed across the city of Southampton in the UK. Full article
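LoRaWAN uplinks carry only small payloads, so readings are normally packed into a few bytes before transmission. The sketch below packs PM2.5, PM10, temperature and humidity into an 8-byte struct and unpacks it again; the field layout is a hypothetical example, not the encoding used by the deployed devices.

```python
# Sketch: pack PM and climate readings into a compact 8-byte LoRaWAN payload.
# The field layout (uint16 PM2.5 and PM10 in 0.1 ug/m3, int16 temperature in
# 0.1 degC, uint16 relative humidity in 0.1 %) is a hypothetical example.
import struct

FORMAT = ">HHhH"   # big-endian: uint16, uint16, int16, uint16

def encode(pm25, pm10, temp_c, rh):
    return struct.pack(FORMAT,
                       round(pm25 * 10), round(pm10 * 10),
                       round(temp_c * 10), round(rh * 10))

def decode(payload):
    pm25, pm10, temp, rh = struct.unpack(FORMAT, payload)
    return pm25 / 10.0, pm10 / 10.0, temp / 10.0, rh / 10.0

if __name__ == "__main__":
    payload = encode(pm25=12.4, pm10=21.7, temp_c=18.3, rh=64.2)
    print(len(payload), "bytes:", payload.hex())
    print(decode(payload))
```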
Figures:
Figure 1: Particulate Matter (PM) sensors: Alphasense OPC-N2 [24], Plantower PMS5003 [25], Plantower PMS7003 [26], Honeywell HPMA115S0 [27].
Figure 2: Version 1 of the Air Quality (AQ) Internet of Things (IoT) device, without the enclosure lid, installed on an external wall.
Figure 3: A map showing the six deployed Air Quality (AQ) Internet of Things (IoT) devices at School A and School B; the eight LoRaWAN v1 and v2 base stations; and the GPS-confirmed coverage across Southampton, UK [43,44].
Figure 4: Time series comparing the PM2.5 concentrations reported by the "Southampton Centre" Automatic Urban and Rural Network (AURN) station [49] and the mean value of the sensors of one Air Quality (AQ) Internet of Things (IoT) device at School A and one device at School B, between 1 June and 14 June 2018.
Figure 5: The major changes in the evolution of the Air Quality (AQ) Internet of Things (IoT) interior. Five of twenty-one versions are shown. The final, fully populated acrylic version is shown in Figure 6.
Figure 6: Version 2 of the Air Quality (AQ) Internet of Things (IoT) device in a landscape orientation with the enclosure lid removed. The acrylic framework comprises (i) the electronics section containing a Raspberry Pi 3, a Dragino LoRaWAN Hardware Attached on Top (HAT) and power distribution hardware; (ii) the sensor housing containing five Particulate Matter (PM) sensors and one temperature and humidity sensor; and (iii) the air intake and exhaust, separated by the two vertical acrylic partitions.
27 pages, 6327 KiB  
Article
A Tangible Solution for Hand Motion Tracking in Clinical Applications
by Christina Salchow-Hömmen, Leonie Callies, Daniel Laidig, Markus Valtin, Thomas Schauer and Thomas Seel
Sensors 2019, 19(1), 208; https://doi.org/10.3390/s19010208 - 8 Jan 2019
Cited by 46 | Viewed by 15944
Abstract
Objective real-time assessment of hand motion is crucial in many clinical applications including technically-assisted physical rehabilitation of the upper extremity. We propose an inertial-sensor-based hand motion tracking system and a set of dual-quaternion-based methods for estimation of finger segment orientations and fingertip positions. The proposed system addresses the specific requirements of clinical applications in two ways: (1) In contrast to glove-based approaches, the proposed solution maintains the sense of touch. (2) In contrast to previous work, the proposed methods avoid the use of complex calibration procedures, which means that they are suitable for patients with severe motor impairment of the hand. To overcome the limited significance of validation in lab environments with homogeneous magnetic fields, we validate the proposed system using functional hand motions in the presence of severe magnetic disturbances as they appear in realistic clinical settings. We show that standard sensor fusion methods that rely on magnetometer readings may perform well in perfect laboratory environments but can lead to more than 15 cm root-mean-square error for the fingertip distances in realistic environments, while our advanced method yields root-mean-square errors below 2 cm for all performed motions. Full article
(This article belongs to the Section Physical Sensors)
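The fingertip position follows from the per-segment orientations and the functional segment lengths by forward kinematics. The sketch below chains ordinary unit quaternions to rotate each phalanx vector and sums the results; the paper's dual-quaternion formulation, which also encodes translation, is not reproduced, and the segment lengths and example orientations are placeholders.

```python
# Sketch: fingertip position from per-segment orientations by forward
# kinematics with ordinary unit quaternions.  Segment lengths and the example
# orientations are placeholders; the paper's dual-quaternion method differs.
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def fingertip_position(quaternions, lengths, mcp_origin):
    """Sum each phalanx direction (local +x axis) rotated into the global frame."""
    p = np.asarray(mcp_origin, dtype=float)
    for q, l in zip(quaternions, lengths):
        p = p + quat_rotate(q, np.array([l, 0.0, 0.0]))
    return p

if __name__ == "__main__":
    # Proximal, middle, distal phalanx lengths of an index finger in metres (guesses).
    lengths = [0.045, 0.025, 0.020]

    def qz(angle):
        # Unit quaternion for a rotation by `angle` about the z axis.
        return np.array([np.cos(angle / 2), 0.0, 0.0, np.sin(angle / 2)])

    # Each segment flexed about 20 degrees more than the previous one.
    quats = [qz(np.deg2rad(a)) for a in (20, 40, 60)]
    tip = fingertip_position(quats, lengths, mcp_origin=[0.0, 0.0, 0.0])
    print("fingertip position [m]:", np.round(tip, 4))
```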
Figures:
Figure 1: IMU-based modular hand sensor system for real-time motion tracking of the fingertip positions. The system consists of a base unit on the back of the hand, a wireless IMU on the forearm, and up to five sensor strips, each equipped with three IMUs.
Figure 2: 3D real-time visualization of the measured hand posture.
Figure 3: Modeled bones and joints of the human hand. The fingers are numbered from F1 (thumb) to F5 (little finger). The modeled joints are illustrated as colored circles, with numbers indicating the considered degrees of freedom of each joint. The metacarpals (light gray) of F2–F5 form the palm, which is treated as flat and rigid. The white dotted line from the wrist to the MCP joint of the middle finger marks the length of the palm l_h, which needs to be measured for the model. The remaining joint center positions of the MCP and T-CMC joints are derived via constant ratios and angles, as exemplarily shown for the thumb with angle α = 58°.
Figure 4: Definition of hand and finger movements, illustrated for the left hand. Ulnar and radial deviation are also known as ulnar/radial abduction or wrist adduction. Finger abduction in the MCP joints describes the movement away from the center of the extremity, whereas finger adduction refers to the movement towards the center of the extremity.
Figure 5: Top: model terms for the exemplary finger F3 of a left hand and location of coordinate frames: IMU frames as well as bone frames (both without index) located in the centers of rotation, and the inertial global coordinate frame (index G). The IMU and bone frames are assumed to differ only by a constant translational offset. l_p, l_m, and l_d denote the functional lengths of the proximal, middle, and distal phalanges. The point p marks the fingertip position of interest. Bottom: wrist coordinate system (WCS) located in the center of rotation c of the wrist for the left hand.
Figure 6: Summary of the three proposed fingertip position estimation methods: the baseline method (B) and two advanced methods (M1 and M2). M1 and M2 exploit joint constraints to compensate errors in the attachment and the orientation estimation. In contrast to M1, M2 is completely magnetometer-free and thus suitable for environments with severely disturbed magnetic fields.
Figure 7: Measurement setup and positioning of the optical markers. The wooden fixture assures repeatability of the experiments and unrestricted finger motion without any translation of the forearm. The markers on top of the MCP joints are used for visualization purposes, and those on top of the fixture are used for coordinate transformation between the inertial system and the optical system.
Figure 8: Position of the optical markers visualized as cyan circles, and definition of coordinate systems illustrated in (a) a top view of the wrist and hand, and (b) a side view of a cut through the middle finger F3 (opt: optical system, WCS: wrist coordinate system, tilde: marker coordinate system).
Figure 9: Measurement setup for Setting 2 in the direct presence of ferromagnetic materials and electronic devices.
Figure 10: Fingertip in top view (left) and side view (right). Marked are the points of contact on the left and right (tip_l, tip_r), on the top and bottom (tip_t, tip_b) and at the distal tip (tip_d) of the finger. The parameter w is the width and t the thickness of the finger.
Figure 11: Experimental setup for the evaluation under realistic conditions. The images display the hand poses with the spacer and wooden block.
Figure 12: Median, 25th and 75th percentiles, and maximum values of the error E between the optical motion capture measurement and the hand sensor system with method B (red) and methods M1 and M2 (blue) for all conducted experiments with the volunteer in Setting 1. For a better overview, methods M1 and M2 were summarized, whereby the respective higher value is illustrated. Box plots were calculated over time intervals of 30 seconds with at least 10 repetitions for each movement. Abbreviations: A: abduction, F: flexion, AF: abduction and flexion motion, F1: thumb, F2: index finger, F3: middle finger.
Figure 13: Top: Time series of the x-, y-, and z-components of the fingertip position p in the WCS for the combined abduction and flexion motion of the index finger (AF-F2). The solid lines are the positions calculated by the hand sensor system with method M1; the dashed lines depict the optical data. Bottom: time series of the tracking error for all three motions of the index finger F2 between the hand sensor system with method M1 and the optical system. For better illustration, the signals were low-pass filtered with a cutoff frequency of 2 Hz. The error is always below the critical value of 2 cm. Abbreviations: A: abduction, F: flexion, AF: abduction and flexion motion, F2: index finger (cf. Table 4).
Figure 14: Average RMSE and standard deviations over all four participants evaluated in Setting 2 for each method. Please refer to Table 5 for a description of the experiments. Method M2 always yielded the smallest RMSE, with comparatively minor differences between the participants. Abbreviations: P1–P4: experiments 1–4, F1: thumb, F2: index finger, F3: middle finger.
Figure 15: Representative time series of a pinch motion (experiment P3). The measured distance between tip_d of F1 and F2 (solid green line) is close to the true value (0 cm; dashed green line). Note that the pinch was released during the gray-marked time period.
14 pages, 4325 KiB  
Article
Low-Dose Computed Tomography Image Super-Resolution Reconstruction via Random Forests
by Peijian Gu, Changhui Jiang, Min Ji, Qiyang Zhang, Yongshuai Ge, Dong Liang, Xin Liu, Yongfeng Yang, Hairong Zheng and Zhanli Hu
Sensors 2019, 19(1), 207; https://doi.org/10.3390/s19010207 - 8 Jan 2019
Cited by 14 | Viewed by 5092
Abstract
Aiming at reducing computed tomography (CT) scan radiation while ensuring CT image quality, a new low-dose CT super-resolution reconstruction method based on combining a random forest with coupled dictionary learning is proposed. The random forest classifier finds the optimal solution of the mapping relationship between low-dose CT (LDCT) images and high-dose CT (HDCT) images, and CT image reconstruction is then completed by coupled dictionary learning. An iterative method is developed to improve robustness, the important coefficients for the tree structure are discussed, and the optimal solutions are reported. The proposed method is further compared with a traditional interpolation method. The results show that the proposed algorithm obtains a higher peak signal-to-noise ratio (PSNR) and structural similarity index measurement (SSIM) and has a better ability to reduce noise and artifacts. This method can be applied to many different medical imaging fields in the future, and multithreaded computation can reduce the time consumption. Full article
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing)
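The reported image-quality metrics can be computed for any reconstruction in a few lines; the sketch below evaluates PSNR with NumPy and SSIM with scikit-image for a reference image and a degraded copy. The random test images are placeholders.

```python
# Sketch: compute PSNR and SSIM between a reference image and a reconstruction.
# Uses random placeholder images; SSIM comes from scikit-image.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference, test, data_range=1.0):
    mse = np.mean((reference - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.random((256, 256))              # stand-in reference slice
    noisy = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)
    print(f"PSNR = {psnr(reference, noisy):.2f} dB")
    print(f"SSIM = {structural_similarity(reference, noisy, data_range=1.0):.4f}")
```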
Figures:
Figure 1: Flowchart of the SR algorithm.
Figure 2: From left to right, top to bottom: (a) HDCT image; (b) LDCT image; (c) reconstructed image obtained using the bicubic interpolation method; (d) image reconstructed by the proposed method.
Figure 3: Profiles of the different results, shown for the 320th row of the image in Figure 2. The black curve represents the profile of the original CT image in Figure 2a. The red curve represents the profile of the CT image reconstructed using the bicubic interpolation method in Figure 2c. The blue curve represents the profile of the CT image reconstructed using the proposed method in Figure 2d.
Figure 4: From left to right, (a–c) respectively represent the residual images of the LDCT image in Figure 2b, the result reconstructed by bicubic interpolation in Figure 2c, and the result of the method proposed in this paper in Figure 2d.
Figure 5: (a) HDCT image; (b) LDCT image; (c) reconstructed image obtained using the bicubic interpolation method; (d) image reconstructed by the method of this paper with 1 iteration; (e) with 2 iterations; (f) with 5 iterations.
Figure 6: Images (a–f) show zoomed views of the portions marked with red squares in Figure 5a, providing more detail of the differences in reconstructed image quality under different iterations.
Figure 7: Changes in PSNR and SSIM values with the number of iterations for the simulation experiment using the proposed method.
Figure 8: As shown in (a), when T = 10 the PSNR is close to saturated; in (b) the time increases linearly as T increases.
Figure 9: (a) shows that when ξ_max = 15 the result is saturated; (b) shows the relationship between the maximum depth of the tree ξ_max and the training time.
Figure 10: (a) The effect of the regularization parameter η on the results; (b) the effect of the regularization parameter k on the results.
21 pages, 19938 KiB  
Article
A Multiscale Denoising Framework Using Detection Theory with Application to Images from CMOS/CCD Sensors
by Khuram Naveed, Shoaib Ehsan, Klaus D. McDonald-Maier and Naveed Ur Rehman
Sensors 2019, 19(1), 206; https://doi.org/10.3390/s19010206 - 8 Jan 2019
Cited by 15 | Viewed by 5833
Abstract
Output from imaging sensors based on CMOS and CCD devices is prone to noise due to inherent electronic fluctuations and low photon count. The resulting noise in the acquired image can be effectively modelled as signal-dependent Poisson noise or as a mixture of Poisson and Gaussian noise. To that end, we propose a generalized framework based on detection theory and hypothesis testing, coupled with the variance stability transformation (VST), for Poisson or Poisson–Gaussian denoising. The VST transforms signal-dependent Poisson noise into signal-independent Gaussian noise with stable variance. Subsequently, multiscale transforms are employed on the noisy image to segregate signal and noise into separate coefficients. That facilitates the application of local binary hypothesis testing on multiple scales using the empirical distribution function (EDF) for the purpose of detection and removal of noise. We demonstrate the effectiveness of the proposed framework with different multiscale transforms and on a wide variety of input datasets. Full article
(This article belongs to the Section Physical Sensors)
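The classical variance-stabilizing transformation for Poisson noise is the Anscombe transform A(x) = 2√(x + 3/8), which maps Poisson data to approximately unit-variance Gaussian noise. The sketch below applies the forward transform, uses plain Gaussian smoothing as a stand-in for the paper's EDF-based multiscale hypothesis testing, and inverts with the simple algebraic inverse rather than the exact unbiased inverse.

```python
# Sketch: Poisson denoising via the Anscombe VST.  The Gaussian smoothing is a
# stand-in for the paper's EDF-based multiscale hypothesis testing, and the
# simple algebraic inverse is used instead of the exact unbiased inverse.
import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)      # Poisson -> approx. unit-variance Gaussian

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0        # simple algebraic inverse

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    peak = 10.0
    clean = peak * np.ones((128, 128))
    clean[32:96, 32:96] = 0.3 * peak          # a darker square
    noisy = rng.poisson(clean).astype(float)  # signal-dependent Poisson noise
    stabilized = anscombe(noisy)
    denoised = inverse_anscombe(gaussian_filter(stabilized, sigma=1.5))
    mse_before = np.mean((noisy - clean) ** 2)
    mse_after = np.mean((denoised - clean) ** 2)
    print(f"MSE before: {mse_before:.2f}, after: {mse_after:.2f}")
```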
Show Figures

Figure 1: Pipeline of CMOS imaging acquisition system (adapted from [38]).
Figure 2: CMOS photo-sensing pixel architecture (adapted from [41]).
Figure 3: Noise model of CMOS image sensor (adapted from [42]).
Figure 4: Depiction of a simple detection problem where the probability of the null hypothesis p(x|H0) = N(0, σ²) and the probability of the alternate hypothesis p(x|H1) = N(μ, σ²) are plotted together: (a) shows the relationship between the error probability regions p(H0|H1), p(H1|H0) and the detection probability region p(H1|H1); (b,c) highlight the trade-off between Pfa and Pd with an increase in threshold value λ.
Figure 5: Block diagram of the proposed framework for Poisson and Poisson–Gaussian denoising using detection theory.
Figure 6: Hypothesis testing based signal and noise detection: F0^(k)(t) is the reference Gaussian CDF plotted along with the EDF Fs^i(t) of signal coefficients (dashed line) and the EDF Fη^i(t) of noise coefficients (dotted line).
Figure 7: Input (2D) signals or images used for experimentation in this work, including (a) the 'Lena' image, (b) the 'Padma River' image and (c) the 'Ogden Valley' image.
Figure 8: Poisson denoising results on the 'Lena' image by various methods at signal peak = 20.
Figure 9: Poisson denoising results on the 'Padma River' image by various methods at signal peak = 5.
Figure 10: Poisson–Gaussian denoising results on the 'Ogden Valley' image by various methods, where the noisy image is corrupted by Poisson noise at signal peak = 10 and σ = 1 (second row) and at signal peak = 5 and σ = 0.5 (third row).
Figure 11: Performance analysis of the proposed GAT-AD-DTCWT on a noisy image from the RENOIR dataset [58], which contains noisy images from CMOS sensors corrupted by real sensor noise.
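As a rough illustration of the variance-stabilizing step described in the abstract above, the sketch below applies the classic Anscombe transform, a common choice of VST that maps Poisson-distributed pixel values to approximately unit-variance Gaussian data. The image size, peak value and the simple algebraic inverse are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def anscombe_vst(img):
    """Anscombe transform: Poisson counts -> approximately unit-variance Gaussian data."""
    return 2.0 * np.sqrt(img + 3.0 / 8.0)

def inverse_anscombe(vst_img):
    """Simple algebraic inverse (the exact unbiased inverse differs slightly)."""
    return (vst_img / 2.0) ** 2 - 3.0 / 8.0

# Simulate a low-count Poisson-noisy image and check the variance stabilization.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 20.0)              # constant image at signal peak 20
noisy = rng.poisson(clean).astype(float)     # signal-dependent Poisson noise
stabilized = anscombe_vst(noisy)
print(np.var(stabilized))                    # close to 1 after stabilization
```

After stabilization, any Gaussian-oriented multiscale denoiser can be applied before inverting the transform.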
10 pages, 2669 KiB  
Article
In-Fiber Mach-Zehnder Interferometer Based on Three-Core Fiber for Measurement of Directional Bending
by Lei Ding, Yu Li, Cai Zhou, Min Hu, Yuli Xiong and Zhongliang Zeng
Sensors 2019, 19(1), 205; https://doi.org/10.3390/s19010205 - 8 Jan 2019
Cited by 21 | Viewed by 4997
Abstract
A highly sensitive directional bending sensor based on a three-core fiber (TCF) Mach-Zehnder interferometer (MZI) is presented in this study. This MZI-based bending sensor was fabricated by fusion-splicing a section of TCF between two single-mode fibers (SMF) with core-offset. Due to the location [...] Read more.
A highly sensitive directional bending sensor based on a three-core fiber (TCF) Mach-Zehnder interferometer (MZI) is presented in this study. This MZI-based bending sensor was fabricated by fusion-splicing a section of TCF between two single-mode fibers (SMF) with core-offset. Due to the location of the core in the TCF, a bend applied to the TCF-based MZI leads to an elongation or shortening of the core, which makes the sensor suitable for directional bending measurement. To analyze the bending characteristics, two types of TCF-based sensors, with the fusion-spliced core located at different positions between the SMFs, were investigated. A swept source was employed in the measurement technique. The experimental results showed that the bending sensitivities of the two sensors in this setup were 15.36 nm/m⁻¹ and 3.11 nm/m⁻¹ at the bending direction of 0°, and −20.48 nm/m⁻¹ and −5.29 nm/m⁻¹ at the bending direction of 180°. The temperature sensitivities of the two sensors were 0.043 nm/°C and 0.041 nm/°C, respectively. The proposed sensors are compact, versatile, inexpensive to fabricate, and are expected to have potential applications in biomedical sensing. Full article
(This article belongs to the Special Issue Optical Sensing Based on Microscale Devices)
Show Figures

Figure 1: Schematic structure of the TCF-based Mach-Zehnder interferometer. SMF: single-mode fiber; TCF: three-core fiber; I_co: light intensity of the core mode; I_cl: light intensity of the dominant cladding mode; I_1: intensity of the interference signal.
Figure 2: (a) Microscope image of the cross-section of the TCF. (b) Schematic of the distribution of the three cores. d_1, d_2, d_3: distances between Cores 1, 2 and 3 and the neutral plane; R_core1, R_core2, R_core3: diameters of Cores 1, 2 and 3; n_co, n_cl: refractive indices of the core and cladding modes.
Figure 3: Schematic view of a curved TCF-based Mach-Zehnder interferometer (MZI). SMF: single-mode fiber; TCF: three-core fiber; S_1: SMF–TCF interface; S_2: TCF–SMF interface; d: distance between the eccentric core and the neutral plane; R: radius of curvature.
Figure 4: Experimental setup for directional bending measurement under different directions and curvatures. PC: polarization controller; OSA: optical spectrum analyzer.
Figure 5: Transmission spectrum of the TCF-based MZI: (a) sensor 1 and (b) sensor 2.
Figure 6: Spectra of TCF-based sensor 1 under different curvatures for bending directions of (a) 0°, (b) 90°, and (c) 180°. BD: bending direction.
Figure 7: Spectra of TCF-based sensor 2 under different curvatures for bending directions of (a) 0°, (b) 90°, and (c) 180°. BD: bending direction.
Figure 8: Wavelength shift of the interference dip under different curvatures for bending directions of 0°, 90°, and 180°: (a) sensor 1 and (b) sensor 2.
Figure 9: Wavelengths of interference minima D_1 and P_1 versus temperature.
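As a hedged illustration of how the reported sensitivities turn a measured dip-wavelength shift into curvature, the snippet below simply divides the shift by the corresponding bending sensitivity. The sensitivity values are those quoted in the abstract; the function name and the example shift are assumptions.

```python
# Bending sensitivities (nm per 1/m of curvature) quoted in the abstract.
SENSITIVITY_NM_PER_INV_M = {
    "sensor1_0deg": 15.36, "sensor2_0deg": 3.11,
    "sensor1_180deg": -20.48, "sensor2_180deg": -5.29,
}

def curvature_from_shift(delta_lambda_nm: float, key: str) -> float:
    """Curvature (1/m) = wavelength shift (nm) / bending sensitivity (nm per 1/m)."""
    return delta_lambda_nm / SENSITIVITY_NM_PER_INV_M[key]

print(curvature_from_shift(7.68, "sensor1_0deg"))   # ~0.5 1/m for a 7.68 nm shift
```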
13 pages, 2717 KiB  
Article
Deep Belief Network for Spectral–Spatial Classification of Hyperspectral Remote Sensor Data
by Chenming Li, Yongchang Wang, Xiaoke Zhang, Hongmin Gao, Yao Yang and Jiawei Wang
Sensors 2019, 19(1), 204; https://doi.org/10.3390/s19010204 - 8 Jan 2019
Cited by 57 | Viewed by 6812
Abstract
With the development of high-resolution optical sensors, the classification of ground objects combined with multivariate optical sensors is a hot topic at present. Deep learning methods, such as convolutional neural networks, are applied to feature extraction and classification. In this work, a novel [...] Read more.
With the development of high-resolution optical sensors, the classification of ground objects combined with multivariate optical sensors is a hot topic at present. Deep learning methods, such as convolutional neural networks, are applied to feature extraction and classification. In this work, a novel deep belief network (DBN) hyperspectral image classification method based on multivariate optical sensors and built by stacking restricted Boltzmann machines is proposed. We introduced the DBN framework to classify spatial hyperspectral sensor data. Then, the improved method (a combination of spectral and spatial information) was verified. After unsupervised pretraining and supervised fine-tuning, the DBN model could successfully learn features. Additionally, we added a logistic regression layer that could classify the hyperspectral images. Moreover, the proposed training method, which fuses spectral and spatial information, was tested over the Indian Pines and Pavia University datasets. The advantages of this method over traditional methods are as follows: (1) the network has a deep structure and its feature-extraction ability is stronger than that of traditional classifiers; (2) experimental results indicate that our method outperforms traditional classification and other deep learning approaches. Full article
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)
Show Figures

Figure 1: Architecture of a deep belief network (DBN).
Figure 2: Process of hyperspectral classification based on DBN.
Figure 3: Pavia University representing nine classes.
Figure 4: Indian Pines representing 16 classes.
Figure 5: Effect of principal components (SC–DBN classifier).
Figure 6: Spatial information-dominated classification result for Pavia University (a) and Indian Pines (b).
Figure 7: Effect of principal components (JSSC–DBN classifier).
Figure 8: Joint-dominated classification result for Pavia University (a) and Indian Pines (b).
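The abstract describes stacking restricted Boltzmann machines and adding a logistic regression layer on top. The sketch below shows that generic recipe with scikit-learn's BernoulliRBM; only the logistic layer is trained with labels here, unlike the full supervised fine-tuning in the paper, and the random spectra are a stand-in for real Indian Pines or Pavia University pixels.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy per-pixel "spectra" scaled to [0, 1]; real experiments use hyperspectral cubes.
rng = np.random.default_rng(0)
X = rng.random((500, 103))           # 500 pixels, 103 spectral bands
y = rng.integers(0, 9, size=500)     # 9 land-cover classes

model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=500)),   # classification layer on the learned features
])
model.fit(X, y)    # RBMs are trained unsupervised in sequence, then the classifier is fit
print(model.score(X, y))
```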
15 pages, 1306 KiB  
Article
Wireless Sensor Networks Intrusion Detection Based on SMOTE and the Random Forest Algorithm
by Xiaopeng Tan, Shaojing Su, Zhiping Huang, Xiaojun Guo, Zhen Zuo, Xiaoyong Sun and Longqing Li
Sensors 2019, 19(1), 203; https://doi.org/10.3390/s19010203 - 8 Jan 2019
Cited by 146 | Viewed by 9716
Abstract
With the wide application of wireless sensor networks in military and environmental monitoring, security issues have become increasingly prominent. Data exchanged over wireless sensor networks is vulnerable to malicious attacks due to the lack of physical defense equipment. Therefore, corresponding schemes of intrusion [...] Read more.
With the wide application of wireless sensor networks in military and environmental monitoring, security issues have become increasingly prominent. Data exchanged over wireless sensor networks is vulnerable to malicious attacks due to the lack of physical defense equipment. Therefore, corresponding schemes of intrusion detection are urgently needed to defend against such attacks. Considering the serious class imbalance of the intrusion dataset, this paper proposes a method of using the synthetic minority oversampling technique (SMOTE) to balance the dataset and then uses the random forest algorithm to train the classifier for intrusion detection. The simulations are conducted on a benchmark intrusion dataset, and the accuracy of the random forest algorithm has reached 92.39%, which is higher than other comparison algorithms. After oversampling the minority samples, the accuracy of the random forest combined with the SMOTE has increased to 92.57%. This shows that the proposed algorithm provides an effective solution to solve the problem of class imbalance and improves the performance of intrusion detection. Full article
Show Figures

Figure 1: (a) The data sample X and its five nearest neighbors; (b) interpolation principle of SMOTE.
Figure 2: (a) Imbalanced dataset; (b) entirely oversampled dataset.
Figure 3: The architecture of the intrusion detection system.
Figure 4: Distribution of various types of data in the dataset.
Figure 5: (a) The training time and testing time of each classifier; (b) the accuracy of each classifier.
Figure 6: (a) The training time and testing time of each classifier combined with SMOTE; (b) the accuracy of each classifier combined with SMOTE.
Figure 7: Comparison of the performance of J48, RandomForest, S+J48, and S+RandomForest under different proportions of the dataset.
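The SMOTE-plus-random-forest pipeline named in the abstract maps directly onto off-the-shelf tooling. Below is a minimal sketch with the imbalanced-learn and scikit-learn libraries; the synthetic 95:5 dataset stands in for the benchmark intrusion dataset and the hyperparameters are illustrative, not the paper's settings.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced stand-in for the intrusion data (roughly 95:5 normal vs. attack).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split so the test set stays untouched.
X_bal, y_bal = SMOTE(k_neighbors=5, random_state=0).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_bal, y_bal)
print(accuracy_score(y_test, clf.predict(X_test)))
```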
19 pages, 379 KiB  
Review
“Statistics 103” for Multitarget Tracking
by Ronald Mahler
Sensors 2019, 19(1), 202; https://doi.org/10.3390/s19010202 - 8 Jan 2019
Cited by 13 | Viewed by 4367
Abstract
The finite-set statistics (FISST) foundational approach to multitarget tracking and information fusion was introduced in the mid-1990s and extended in 2001. FISST was devised to be as “engineering-friendly” as possible by avoiding avoidable mathematical abstraction and complexity—and, especially, by avoiding measure theory and [...] Read more.
The finite-set statistics (FISST) foundational approach to multitarget tracking and information fusion was introduced in the mid-1990s and extended in 2001. FISST was devised to be as “engineering-friendly” as possible by avoiding avoidable mathematical abstraction and complexity—and, especially, by avoiding measure theory and measure-theoretic point process (p.p.) theory. Recently, however, an allegedly more general theoretical foundation for multitarget tracking has been proposed. In it, the constituent components of FISST have been systematically replaced by mathematically more complicated concepts—and, especially, by the very measure theory and measure-theoretic p.p.’s that FISST eschews. It is shown that this proposed alternative is actually a mathematical paraphrase of part of FISST that does not correctly address the technical idiosyncrasies of the multitarget tracking application. Full article
19 pages, 4976 KiB  
Article
Wireless Charging Deployment in Sensor Networks
by Wei-Yu Lai and Tien-Ruey Hsiang
Sensors 2019, 19(1), 201; https://doi.org/10.3390/s19010201 - 8 Jan 2019
Cited by 10 | Viewed by 5175
Abstract
Charging schemes utilizing mobile wireless chargers can be applied to prolong the lifespan of a wireless sensor network. In considering charging schemes with mobile chargers, most current studies focus on charging each sensor from a single position, then optimizing the moving paths of [...] Read more.
Charging schemes utilizing mobile wireless chargers can be applied to prolong the lifespan of a wireless sensor network. In considering charging schemes with mobile chargers, most current studies focus on charging each sensor from a single position, then optimizing the moving paths of the chargers. However, in reality, a wireless charger may charge the same sensor from several positions in its path. In this paper we consider this fact and seek to minimize both the number of charging locations and the total required charging time. Two charging plans are developed. The first plan considers the charging time required by each sensor and greedily selects the charging service positions. The second one is a two-phase plan, where the number of charging positions is first minimized, then minimum charging times are assigned to every position according to the charging requirements of the nearby sensors. This paper also corrects a problem neglected by some studies in minimizing the number of charging service positions and further provides a corresponding solution. Empirical studies show that compared with other minimal clique partition (MCP)-based methods, the proposed charging plan may save up to 60% in terms of both the number of charging positions and the total required charging time. Full article
(This article belongs to the Special Issue Algorithm and Distributed Computing for the Internet of Things)
Show Figures

Figure 1: Case where the same number of stops is set up, but the total required charging time is different.
Figure 2: Example of charging time allocation.
Figure 3: Example of candidate stops setup.
Figure 4: Set-up locations of charging stops.
Figure 5: The intersection region of circles drawn with the locations of sensors s_i, s_j, and s_k as the centres with a radius of 2r could not be covered by two circles with a radius of r.
Figure 6: Confined region covering a clique.
Figure 7: Clique covered by three circles with a radius of r and with a distance within 2r.
Figure 8: Probability of requiring to set up more than one charging stop for a clique.
Figure 9: Candidate charging stops setup.
Figure 10: Illustration showing that a sensor could receive power from at most three charging stops.
Figure 11: Number of charging stops.
Figure 12: Charging time of the charger.
Figure 13: Difference ratio of charging time.
Figure 14: Charging time and number of stops.
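For the first charging plan, which greedily selects charging service positions, the sketch below gives one plausible reading of the greedy step: repeatedly pick the candidate stop that serves the most not-yet-covered sensors within charging range. The coverage rule, names and coordinates are illustrative assumptions, not the paper's exact algorithm.

```python
import math

def greedy_stops(sensors, candidates, r):
    """Pick candidate stops one by one, each time taking the one covering most uncovered sensors."""
    uncovered = set(range(len(sensors)))
    chosen = []
    def coverage(c):
        return {i for i in uncovered if math.dist(c, sensors[i]) <= r}
    while uncovered:
        best = max(candidates, key=lambda c: len(coverage(c)))
        served = coverage(best)
        if not served:          # remaining sensors unreachable from any candidate
            break
        chosen.append(best)
        uncovered -= served
    return chosen

sensors = [(0, 0), (1, 0), (5, 5), (6, 5)]
candidates = [(0.5, 0), (5.5, 5), (3, 3)]
print(greedy_stops(sensors, candidates, r=1.5))   # [(0.5, 0), (5.5, 5)]
```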
13 pages, 3851 KiB  
Article
Reinforcement Strains in Reinforced Concrete Tensile Members Recorded by Strain Gauges and FBG Sensors: Experimental and Numerical Analysis
by Gintaris Kaklauskas, Aleksandr Sokolov, Regimantas Ramanauskas and Ronaldas Jakubovskis
Sensors 2019, 19(1), 200; https://doi.org/10.3390/s19010200 - 7 Jan 2019
Cited by 40 | Viewed by 6509
Abstract
Experimental and numerical studies have been carried out on reinforced concrete (RC) short tensile specimens. Double pull-out tests employed rectangular RC elements of a length determined not to yield any additional primary cracks. Tests were carried out with tensor strain gauges installed within [...] Read more.
Experimental and numerical studies have been carried out on reinforced concrete (RC) short tensile specimens. Double pull-out tests employed rectangular RC elements of a length determined not to yield any additional primary cracks. Tests were carried out with tensor strain gauges installed within a specially modified reinforcement bar and, alternatively, with fibre Bragg grating based optical sensors. The aim of this paper is to analyse the different experimental setups regarding obtaining more accurate and reliable reinforcement strain distribution data. Furthermore, reinforcement strain profiles obtained numerically using the stress transfer approach and the Model Code 2010 provided bond-slip model were compared against the experimental results. Accurate knowledge of the relation between the concrete and the embedded reinforcement is necessary and lacking to this day for less scattered and reliable prediction of cracking behaviour of RC elements. The presented experimental strain values enable future research on bond interaction. In addition, few double pull-out test results are published when compared to ordinary bond tests of single pull-out tests with embedded reinforcement. The authors summarize the comparison with observations on experimental setups and discuss the findings. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1: Installation and spacing of strain gauges: (a) longitudinal layout; (b) sectional view of the reinforcement bar; (c) strain gauges within one half of the reinforcement bar.
Figure 2: Installation and spacing of fiber Bragg grating sensors: (a) longitudinal layout; (b) sectional view of the reinforcement bar; (c) enlarged view of a fragment of the test specimen.
Figure 3: Experimental setups of (a) the tensor strain gauge test and (b) the FBG sensor test.
Figure 4: Experimental reinforcement strain distributions along the steel bars at key loading steps.
Figure 5: Stress transfer approach explained through (a) strain, slip, and bond stress variations along the reinforced concrete prism; (b) the numerical iterative procedure as defined for half an element.
Figure 6: Flowchart of the iterative stress transfer procedure.
Figure 7: Predicted versus experimental reinforcement strain profiles: (a) tensor strain gauge test and (b) FBG sensor test.
Figure 8: Estimated bond stress and slip distribution along the test specimens: (a) strain gauge test and (b) FBG sensor test.
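For readers unfamiliar with how FBG readings become strain values, the snippet below applies the standard Bragg-wavelength-shift relation Δλ/λ0 = (1 − p_e)·ε. The photo-elastic coefficient p_e ≈ 0.22 and the 1550 nm centre wavelength are generic textbook assumptions, not parameters reported in this paper.

```python
P_E = 0.22            # effective photo-elastic coefficient of silica fibre (assumed)
LAMBDA_0_NM = 1550.0  # unstrained Bragg wavelength in nm (assumed)

def strain_from_shift(delta_lambda_nm: float) -> float:
    """Axial strain (dimensionless) from a Bragg wavelength shift in nm."""
    return delta_lambda_nm / (LAMBDA_0_NM * (1.0 - P_E))

print(strain_from_shift(1.2) * 1e6, "microstrain")   # ~993 microstrain for a 1.2 nm shift
```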
29 pages, 1445 KiB  
Review
A Survey of Energy-Efficient Communication Protocols with QoS Guarantees in Wireless Multimedia Sensor Networks
by Shu Li, Jeong Geun Kim, Doo Hee Han and Kye San Lee
Sensors 2019, 19(1), 199; https://doi.org/10.3390/s19010199 - 7 Jan 2019
Cited by 62 | Viewed by 7068
Abstract
In recent years, wireless multimedia sensor networks (WMSNs) have emerged as a prominent technique for delivering multimedia information such as still images and videos. Despite the great attention from research communities, however, multimedia delivery over resource-constrained WMSNs poses great challenges, especially [...] Read more.
In recent years, wireless multimedia sensor networks (WMSNs) have emerged as a prominent technique for delivering multimedia information such as still images and videos. Despite the great attention from research communities, however, multimedia delivery over resource-constrained WMSNs poses great challenges, especially in terms of energy efficiency and quality-of-service (QoS) guarantees. In this paper, recent developments in techniques for designing highly energy-efficient and QoS-capable WMSNs are surveyed. We first study the unique characteristics and the correspondingly imposed requirements of WMSNs. For each requirement, we also summarize the existing solutions. Then we review recent research efforts on energy-efficient and QoS-aware communication protocols, including MAC protocols, with a focus on their prioritization and service differentiation mechanisms, and disjoint multipath routing protocols. Full article
(This article belongs to the Special Issue Visual Sensor Networks and Related Applications)
Show Figures

Figure 1: Reliability provision methods in wireless multimedia sensor networks (WMSNs).
Figure 2: Local processing in WMSNs.
Figure 3: In-network processing in WMSNs.
Figure 4: Field of view of a camera sensor.
Figure 5: Camera calibration in WMSNs.
Figure 6: Camera sensor collaboration schemes in WMSNs.
Figure 7: Characteristics and requirements of WMSNs presented as a unibody.
Figure 8: Next-hop selection in DGR.
Figure 9: An example of Z-MHTR.
Figure 10: Division of topology in GEAM.
17 pages, 5328 KiB  
Article
BeiDou Augmented Navigation from Low Earth Orbit Satellites
by Mudan Su, Xing Su, Qile Zhao and Jingnan Liu
Sensors 2019, 19(1), 198; https://doi.org/10.3390/s19010198 - 7 Jan 2019
Cited by 39 | Viewed by 8392
Abstract
Currently, the Global Navigation Satellite System (GNSS) mainly uses satellites in Medium Earth Orbit (MEO) to provide position, navigation, and timing (PNT) services. The weak navigation signals limit its usage in deep attenuation environments and make it easy to jam or spoof [...] Read more.
Currently, the Global Navigation Satellite System (GNSS) mainly uses satellites in Medium Earth Orbit (MEO) to provide position, navigation, and timing (PNT) services. The weak navigation signals limit its usage in deep attenuation environments and make it easy to jam or spoof. Moreover, because the satellites are far from the Earth, their motion across the sky and the resulting geometry change are relatively slow, so a long time is needed to achieve centimeter-level positioning accuracy. These disadvantages can be addressed by using satellites in Low Earth Orbit (LEO) as navigation satellites. In this contribution, the advantages of navigation from a LEO constellation have been investigated and analyzed theoretically. The space segment of the global Chinese BeiDou Navigation Satellite System, consisting of three GEO, three IGSO, and 24 MEO satellites, has been simulated together with a LEO constellation of 120 satellites in 10 orbit planes with an inclination of 55 degrees in a nearly circular orbit (eccentricity about 0.000001) at an approximate altitude of 975 km. With the simulated data, the performance of the LEO constellation in augmenting the global Chinese BeiDou Navigation Satellite System (BeiDou-3) has been assessed, as one example of the promise of using LEO as a navigation system. The results demonstrate that the satellite visibility and position dilution of precision are significantly improved, particularly in the mid-latitude part of the Asia-Pacific region, once the LEO data are combined with BeiDou-3 for navigation. Most importantly, the convergence time for Precise Point Positioning (PPP) can be shortened from about 30 min to 1 min, which is essential and promising for real-time PPP applications. Considering that there are plenty of commercial LEO communication constellations with hundreds or thousands of satellites, navigation from LEO will be an economical and promising way to reduce the heavy reliance on current GNSS systems. Full article
(This article belongs to the Special Issue High-Precision GNSS in Remote Sensing Applications)
Show Figures

Figure 1: The satellite distribution of the BeiDou-3 and LEO constellation.
Figure 2: The distribution of selected ground stations (a) and ground track of the LEO constellation (b).
Figure 3: Global distribution of satellite visibility for BeiDou-3-only (a) and the BeiDou-3/LEO combined constellation (b).
Figure 4: Global distribution of PDOP for BeiDou-3-only (a) and the BeiDou-3/LEO combined constellation (b).
Figure 5: Time series of kinematic PPP positioning errors for BeiDou-3-only and BeiDou-3 + LEO combined solutions: (a) CENT BeiDou-3 only; (b) CENT BeiDou-3 + LEO; (c) POTS BeiDou-3 only; (d) POTS BeiDou-3 + LEO; (e) NTUS BeiDou-3 only; (f) NTUS BeiDou-3 + LEO.
Figure 6: Time series of kinematic PPP positioning errors of the CENT station in 1 h after the convergence of PPP for BeiDou-3 (a) and BeiDou-3 + LEO combined solutions (b), respectively.
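The improvement in position dilution of precision reported above rests on the standard DOP computation from receiver–satellite geometry: build a design matrix of unit line-of-sight vectors plus a clock column and take the square root of the trace of the position block of (GᵀG)⁻¹. The sketch below shows that computation; the receiver and satellite coordinates are made-up values, not output of the paper's simulation.

```python
import numpy as np

def pdop(receiver, satellites):
    """Position dilution of precision from receiver and satellite ECEF-like coordinates (m)."""
    rows = []
    for sat in satellites:
        los = np.asarray(sat, float) - np.asarray(receiver, float)
        u = los / np.linalg.norm(los)
        rows.append([-u[0], -u[1], -u[2], 1.0])      # x, y, z and receiver clock
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)
    return float(np.sqrt(np.trace(Q[:3, :3])))

receiver = (6371e3, 0.0, 0.0)                         # receiver on the Earth's surface
sats = [(6371e3 + 975e3, 2000e3, 500e3),              # three LEO-altitude satellites (assumed)
        (6371e3 + 975e3, -1500e3, 800e3),
        (6371e3 + 975e3, 300e3, -1800e3),
        (6371e3 + 20000e3, 1000e3, 2000e3)]           # plus one MEO-altitude satellite (assumed)
print(pdop(receiver, sats))
```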
29 pages, 10608 KiB  
Article
Faster R-CNN and Geometric Transformation-Based Detection of Driver’s Eyes Using Multiple Near-Infrared Camera Sensors
by Sung Ho Park, Hyo Sik Yoon and Kang Ryoung Park
Sensors 2019, 19(1), 197; https://doi.org/10.3390/s19010197 - 7 Jan 2019
Cited by 10 | Viewed by 6735
Abstract
Studies are being actively conducted on camera-based driver gaze tracking in a vehicle environment for vehicle interfaces and analyzing forward attention for judging driver inattention. In existing studies on the single-camera-based method, there are frequent situations in which the eye information necessary for [...] Read more.
Studies are being actively conducted on camera-based driver gaze tracking in a vehicle environment for vehicle interfaces and analyzing forward attention for judging driver inattention. In existing studies on the single-camera-based method, there are frequent situations in which the eye information necessary for gaze tracking cannot be observed well in the camera input image owing to the turning of the driver’s head during driving. To solve this problem, existing studies have used multiple-camera-based methods to obtain images to track the driver’s gaze. However, this method has the drawback of an excessive computation process and processing time, as it involves detecting the eyes and extracting the features of all images obtained from multiple cameras. This makes it difficult to implement it in an actual vehicle environment. To solve these limitations of existing studies, this study proposes a method that uses a shallow convolutional neural network (CNN) for the images of the driver’s face acquired from two cameras to adaptively select camera images more suitable for detecting eye position; faster R-CNN is applied to the selected driver images, and after the driver’s eyes are detected, the eye positions of the camera image of the other side are mapped through a geometric transformation matrix. Experiments were conducted using the self-built Dongguk Dual Camera-based Driver Database (DDCD-DB1) including the images of 26 participants acquired from inside a vehicle and the Columbia Gaze Data Set (CAVE-DB) open database. The results confirmed that the performance of the proposed method is superior to those of the existing methods. Full article
(This article belongs to the Special Issue Deep Learning-Based Image Sensors)
Show Figures

Figure 1: Flowchart of the proposed system.
Figure 2: Experimental setup in the vehicle environment.
Figure 3: Classification of frontal face image. (a) Image captured through the 1st camera and the 2nd camera; (b) combined three-channel image for the input to the shallow CNN; (c) selected frontal face image.
Figure 4: Structure of the shallow CNN.
Figure 5: The structure of faster R-CNN.
Figure 6: Examples of ROI for the input to faster R-CNN.
Figure 7: An example of measuring θ using the center coordinates of the left and right eye boxes.
Figure 8: Mapping of eye regions using geometric transform.
Figure 9: Mapping of eye regions in side face image. (a) Detecting eyes in the frontal face image using faster R-CNN; (b) mapping of eye regions in the side face image using geometric transform.
Figure 10: 15 gaze zones in our experiments.
Figure 11: Examples of images in DDCD-DB1. (a) Driver's open eye images captured through the 1st camera (first row) and 2nd camera (second row); (b) driver's closed eye images captured through the 1st camera (first row) and 2nd camera (second row).
Figure 12: Examples of data augmentation through pixel shifting and cropping.
Figure 13: Training accuracy and loss graphs with the shallow CNN in (a) the 1st fold validation and (b) the 2nd fold validation.
Figure 14: Examples of correct classification cases of frontal face image. (a) Left image selected; (b) right image selected.
Figure 15: Incorrectly classified cases of frontal face image. (a) Right image selected; (b) right image selected.
Figure 16: Data augmentation through horizontal flipping.
Figure 17: Loss graphs during faster R-CNN training in two-fold cross validation. (a) RPN losses from the 1st fold validation; (b) classifier losses from the 1st fold validation; (c) RPN losses from the 2nd fold validation; (d) classifier losses from the 2nd fold validation.
Figure 18: Graphs of (a) recall and (b) precision of the detection of open and closed eyes according to IOU thresholds.
Figure 19: Examples of correctly detected driver's eyes obtained using the proposed method: (a) open eye; (b) closed eye (yellow and red rectangles show the ground-truth and detected boxes, respectively).
Figure 20: Examples of incorrectly detected eyes obtained using the proposed method: (a) open eye; (b) closed eye (yellow and red rectangles show the ground-truth and detected boxes, respectively).
Figure 21: Comparative graphs of (a) recall and (b) precision of the proposed and previous methods according to IOU thresholds.
Figure 22: Various head pose image pairs selected from CAVE-DB.
Figure 23: Examples of correct classification of frontal face image. (a) Right image selected; (b) left image selected.
Figure 24: Examples of incorrect classification of frontal face image. (a) Right image selected; (b) left image selected.
Figure 25: Example of correctly detected eyes by the proposed method (yellow and red rectangles show the ground-truth and detected boxes, respectively).
Figure 26: Example of incorrectly detected eye by the proposed method (yellow and red rectangles show the ground-truth and detected boxes, respectively).
Figure 27: Comparative graphs of (a) recall and (b) precision of the proposed and previous methods according to IOU thresholds.
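The step in which eye positions detected in one camera's image are mapped to the other camera's image through a geometric transformation matrix can be sketched with OpenCV as below. The corresponding points and eye-box centers are invented coordinates, and a plain homography is used purely to illustrate the mapping idea rather than the authors' exact matrix.

```python
import numpy as np
import cv2

# Four assumed point correspondences between the frontal-camera and side-camera images.
pts_frontal = np.float32([[120, 80], [300, 85], [310, 260], [115, 255]])
pts_side    = np.float32([[ 90, 95], [260, 70], [285, 240], [100, 270]])
H = cv2.getPerspectiveTransform(pts_frontal, pts_side)

# Centers of the detected left/right eye boxes in the frontal image (assumed values).
eye_centers_frontal = np.float32([[[180, 150]], [[250, 152]]])
eye_centers_side = cv2.perspectiveTransform(eye_centers_frontal, H)
print(eye_centers_side.reshape(-1, 2))     # mapped eye positions in the side image
```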
21 pages, 3512 KiB  
Article
Fuzzy Logic-Based Geographic Routing Protocol for Dynamic Wireless Sensor Networks
by Xing Hu, Linhua Ma, Yongqiang Ding, Jin Xu, Yan Li and Shiping Ma
Sensors 2019, 19(1), 196; https://doi.org/10.3390/s19010196 - 7 Jan 2019
Cited by 25 | Viewed by 4661
Abstract
The geographic routing protocol only requires the location information of local nodes for routing decisions, and is considered very efficient in multi-hop wireless sensor networks. However, in dynamic wireless sensor networks, it increases the routing overhead while obtaining the location information of destination [...] Read more.
The geographic routing protocol only requires the location information of local nodes for routing decisions, and is considered very efficient in multi-hop wireless sensor networks. However, in dynamic wireless sensor networks, it increases the routing overhead while obtaining the location information of destination nodes by using a location server algorithm. In addition, the routing void problem and location inaccuracy problem also occur in geographic routing. To solve these problems, a novel fuzzy logic-based geographic routing protocol (FLGR) is proposed. The selection criteria and parameters for the assessment of the next forwarding node are also proposed. In FLGR protocol, the next forward node can be selected based on the fuzzy location region of the destination node. Finally, the feasibility of the FLGR forwarding mode is verified and the performance of FLGR protocol is analyzed via simulation. Simulation results show that the proposed FLGR forwarding mode can effectively avoid the routing void problem. Compared with existing protocols, the FLGR protocol has lower routing overhead, and a higher packet delivery rate in a sparse network. Full article
(This article belongs to the Special Issue Signal and Information Processing in Wireless Sensor Networks)
Show Figures

Figure 1: Fuzzy logic-based geographic routing protocol (FLGR) forwarding mode.
Figure 2: Distribution of the nodes in the CNR of node n2: (a) optimal distribution; (b) worst distribution.
Figure 3: Subordinating degree (membership) function of parameter D with r = 250.
Figure 4: Subordinating degree (membership) function of parameter η.
Figure 5: Subordinating degree (membership) function of parameter Ω.
Figure 6: Subordinating degree (membership) function of parameter E.
Figure 7: Routing void avoidance scheme: (a) in the situation of C1 = ∅; (b) in the situation of C1 ≠ ∅ ∩ Z2 = ∅.
Figure 8: Comparison of average number of hops between FLGR and greedy perimeter stateless routing (GPSR).
Figure 9: Impact of node number on packet delivery ratio with v_max = 40.
Figure 10: Impact of maximum speed on packet delivery ratio with n = 60.
Figure 11: Impact of maximum speed on routing overhead ratio with n = 140.
Figure 12: Impact of node number on routing overhead ratio with v_max = 20.
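To make the fuzzy next-hop selection idea concrete, the sketch below scores candidate neighbours with simple ramp-shaped membership functions and a min (fuzzy AND) combination. The parameters, membership shapes and combination rule are illustrative assumptions, not the FLGR rule base.

```python
def falling(x, lo, hi):
    """1 at or below lo, 0 at or above hi, linear in between ('small is good')."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def rising(x, lo, hi):
    """0 at or below lo, 1 at or above hi ('large is good')."""
    return 1.0 - falling(x, lo, hi)

def score(distance_to_dest, residual_energy, r=250.0):
    close = falling(distance_to_dest, 0.0, r)       # membership of "close to destination"
    energetic = rising(residual_energy, 0.2, 1.0)    # membership of "high residual energy"
    return min(close, energetic)                     # conservative fuzzy AND

# Candidate neighbours: (remaining distance to destination in m, residual energy fraction).
candidates = {"n1": (120.0, 0.8), "n2": (200.0, 0.95), "n3": (90.0, 0.3)}
print(max(candidates, key=lambda k: score(*candidates[k])))   # picks n1 here
```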
13 pages, 4589 KiB  
Article
Towards Wearable Comprehensive Capture and Analysis of Skeletal Muscle Activity during Human Locomotion
by Christina Zong-Hao Ma, Yan To Ling, Queenie Tsung Kwan Shea, Li-Ke Wang, Xiao-Yun Wang and Yong-Ping Zheng
Sensors 2019, 19(1), 195; https://doi.org/10.3390/s19010195 - 7 Jan 2019
Cited by 23 | Viewed by 7680
Abstract
Background: Motion capture and analyzing systems are essential for understanding locomotion. However, the existing devices are too cumbersome and can be used indoors only. A newly-developed wearable motion capture and measurement system with multiple sensors and ultrasound imaging was introduced in this study. [...] Read more.
Background: Motion capture and analyzing systems are essential for understanding locomotion. However, the existing devices are too cumbersome and can be used indoors only. A newly-developed wearable motion capture and measurement system with multiple sensors and ultrasound imaging was introduced in this study. Methods: In ten healthy participants, the changes in muscle area and activity of the gastrocnemius, plantarflexion and dorsiflexion of the right leg during walking were evaluated by the developed system and the Vicon system. The existence of significant changes in a gait cycle, comparison of the ankle kinetic data captured by the developed system and the Vicon system, and test-retest reliability (evaluated by the intraclass correlation coefficient, ICC) in each channel’s data captured by the developed system were examined. Results: Moderate to good test-retest reliability of various channels of the developed system (0.512 ≤ ICC ≤ 0.988, p < 0.05), significantly high correlation between the developed system and the Vicon system in ankle joint angles (0.638 ≤ R ≤ 0.707, p < 0.05), and significant changes in muscle activity of the gastrocnemius during a gait cycle (p < 0.05) were found. Conclusion: A newly developed wearable motion capture and measurement system with ultrasound imaging that can accurately capture the motion of one leg was evaluated in this study, which paves the way towards real-time comprehensive evaluation of muscles and joint motions during different activities in both indoor and outdoor environments. Full article
(This article belongs to the Special Issue Wearable Sensors for Gait and Motion Analysis 2018)
Show Figures

Figure 1: (a) An illustration of the wearable mobile sensing system with real-time ultrasound imaging and the locations of the ultrasound probe, electromyography (EMG) electrode, mechanomyography (MMG) electrode, force sensors, and goniometer at the shank and foot. (b) A demonstration of a subject wearing the wearable mobile sensing system.
Figure 2: Flow chart for data processing and analysis.
Figure 3: An example of the user interface (UI) of the mobile SMG system.
Figure 4: Ankle activities in a gait cycle measured by Vicon and the newly developed mobile SMG system in ten healthy subjects. P: p value; R: Pearson's correlation coefficient; plotted symbols mark significantly high correlation between the Vicon and mobile SMG systems, significant changes in trend in consecutive 5% intervals, and significantly high intraclass correlation coefficient (ICC) among three trials. The bold orange line shows the averaged data measured by the mobile SMG system, the bold blue line the averaged data measured by the Vicon system, and the thin dashed lines the standard deviation (SD) of each corresponding bold line.
Figure 5: Percentage changes in muscle area in a gait cycle measured by the newly developed mobile SMG system in ten healthy subjects; plotted symbols mark significant changes in trend in consecutive 5% intervals and significantly high ICC among three trials.
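The correlation values quoted in the abstract come from comparing gait-cycle curves of the wearable system against the Vicon reference. A minimal sketch of that comparison with SciPy is shown below, using synthetic ankle-angle curves rather than the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

gait_pct = np.linspace(0, 100, 101)                               # 0-100% of the gait cycle
vicon_angle = 10 * np.sin(2 * np.pi * gait_pct / 100)             # reference curve (deg), synthetic
wearable_angle = vicon_angle + np.random.default_rng(0).normal(0, 2, gait_pct.size)

r, p = pearsonr(vicon_angle, wearable_angle)
print(f"r = {r:.3f}, p = {p:.3g}")
```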
35 pages, 2873 KiB  
Review
An Appraisal of Lung Nodules Automatic Classification Algorithms for CT Images
by Xinqi Wang, Keming Mao, Lizhe Wang, Peiyi Yang, Duo Lu and Ping He
Sensors 2019, 19(1), 194; https://doi.org/10.3390/s19010194 - 7 Jan 2019
Cited by 38 | Viewed by 9640
Abstract
Lung cancer is one of the most deadly diseases around the world representing about 26% of all cancers in 2017. The five-year cure rate is only 18% despite great progress in recent diagnosis and treatment. Before diagnosis, lung nodule classification is a key [...] Read more.
Lung cancer is one of the most deadly diseases around the world representing about 26% of all cancers in 2017. The five-year cure rate is only 18% despite great progress in recent diagnosis and treatment. Before diagnosis, lung nodule classification is a key step, especially since automatic classification can help clinicians by providing a valuable opinion. Modern computer vision and machine learning technologies allow very fast and reliable CT image classification. This research area has become very hot for its high efficiency and labor saving. The paper aims to draw a systematic review of the state of the art of automatic classification of lung nodules. This research paper covers published works selected from the Web of Science, IEEEXplore, and DBLP databases up to June 2018. Each paper is critically reviewed based on objective, methodology, research dataset, and performance evaluation. Mainstream algorithms are conveyed and generic structures are summarized. Our work reveals that lung nodule classification based on deep learning becomes dominant for its excellent performance. It is concluded that the consistency of the research objective and integration of data deserves more attention. Moreover, collaborative works among developers, clinicians, and other parties should be strengthened. Full article
(This article belongs to the Special Issue Biomedical Infrared Imaging: From Sensors to Applications)
Show Figures

Figure 1: Trend of incidence rates for several cancers in the United States from 1975 to 2013. (a) and (b) present the trends for males and females, respectively.
Figure 2: Demonstration of the CAD market trend and its market share of lung cancer: (a) prediction of the global CAD market, (b) proportion of different CAD system applications.
Figure 3: Problem statements of lung nodule classification in this research.
Figure 4: Demonstration of four types of lung nodule CT images, shown from left to right: W, V, J, and P [12]. Red circles denote the locations of the nodules.
Figure 5: High malignancy suspicious cases are given in (a) and low malignancy suspicious cases in (b) [13]. Red bounding boxes denote the locations of the nodules.
Figure 6: Lung CT image [13]: (a) original image with nodule (green color), (b) the part of the CT image with nodule > 3 mm ROI (green color). Note that the xml file includes only the outline of the nodule (its boundary points); here the entire nodule is displayed for better understanding and visibility.
Figure 7: Demonstration of lung CT images from ELCAP [12]: (a) the complete CT image, (b,c) parts of the CT scans. The symbol "1" marks the location of the nodule.
Figure 8: Trend of performance for selected papers: (a) the performance of two-type classification, where the blue and red boxes indicate accuracy and AUC, respectively, and each box indicates the worst, best, and median performance; (b) the performance of four-type classification.
Figure 9: Trends of the technology used in this field. For the convenience of observation, the 3D feature methods are merged into "others".
13 pages, 7934 KiB  
Article
Multiple Wire-Mesh Sensors Applied to the Characterization of Two-Phase Flow inside a Cyclonic Flow Distribution System
by César Y. Ofuchi, Henrique K. Eidt, Carolina C. Rodrigues, Eduardo N. Dos Santos, Paulo H. D. Dos Santos, Marco J. Da Silva, Flávio Neves, Jr., Paulo Vinicius S. R. Domingos and Rigoberto E. M. Morales
Sensors 2019, 19(1), 193; https://doi.org/10.3390/s19010193 - 7 Jan 2019
Cited by 18 | Viewed by 5326
Abstract
Wire-mesh sensors are used to determine the phase fraction of gas–liquid two-phase flow in many industrial applications. In this paper, we report the use of the sensor to study the flow behavior inside an offshore oil and gas industry device for subsea phase [...] Read more.
Wire-mesh sensors are used to determine the phase fraction of gas–liquid two-phase flow in many industrial applications. In this paper, we report the use of the sensor to study the flow behavior inside an offshore oil and gas industry device for subsea phase separation. The study focused on the behavior of gas–liquid slug flow inside a flow distribution device with four outlets, which is part of the subsea phase separator system. The void fraction profile and the flow symmetry across the outlets were investigated using tomographic wire-mesh sensors and a camera. Results showed an ascending liquid film in the cyclonic chamber with the gas phase at the center of the pipe, generating a symmetrical flow. Dispersed bubbles coalesced into a gas vortex due to the centrifugal force inside the cyclonic chamber. This behavior favored the separation of smaller bubbles from the liquid bulk, which is an important parameter for gas–liquid separator sizing. The void fraction analysis of the outlets showed an even flow distribution with less than 10% difference, a satisfactory result that may contribute to a reduction in subsea gas–liquid separator size. From the outcomes of this study, detailed information regarding this type of flow distribution system was extracted. Thereby, wire-mesh sensors were successfully applied to investigate a new type of equipment for the offshore oil and gas industry. Full article
Show Figures

Figure 1: Schematic representation of the flow distribution system.
Figure 2: Schematic representation of the measurement plant with the wire-mesh sensors (WMS), ultrasound (US), and pressure and temperature sensors (P and T).
Figure 3: Flow map test setup for slug flow, based on [17].
Figure 4: Spatial coordinates of the wire-mesh processing data. Void fraction is represented in the color scale.
Figure 5: Signal processing of water/gas slug flow acquired with the wire-mesh sensor: (a) averaged void fraction time series, (b) cross-sectional and (c) axial cut slice images, and (d) three-dimensional isosurface plot to view the gas-phase boundaries.
Figure 6: Slug flow images with wire-mesh and camera: (a) input and (b) inside the cyclonic chamber.
Figure 7: Tomographic wire-mesh data in a dispersed bubbly flow with J_G = 0.5 m/s and J_L = 1.5 m/s: (a) dispersed bubbles at the input and (b) gas vortex inside the cyclonic chamber.
Figure 8: Axial and longitudinal view of void fraction using the WMS in two-phase flow with J_L = 1.0 m/s and J_G = 1.0 m/s: (a) input, (b) cyclonic chamber, (c) outlet 1 and (d) outlet 2.
Figure 9: Axial view of the mean void fraction by WMS for three sets of gas and liquid superficial velocities: (a) input using 12 × 12 WMS, (b) inside the cyclonic chamber using 12 × 12 WMS, (c) outlet 1 using 4 × 4 WMS, and (d) outlet 2 using 4 × 4 WMS. The bottom of the pipeline is indicated in the outlets.
Figure 10: Axial and transversal view of the void fraction using the wire-mesh sensor in two-phase flow with J_L = 1.0 m/s and J_G = 1.0 m/s: (a) input and (b–e) outlets 1–4.
Figure 11: Void fraction time series (30 s, J_L = 1.5 m/s and J_G = 1.5 m/s): (a) input and (b–e) outlets 1 to 4.
Figure 12: Mean axial void fraction (α_m) measured in 30 s by WMS for three sets of superficial velocities (J_L = 0.5 m/s and J_G = 0.5 m/s; J_L = 1.0 m/s and J_G = 1.0 m/s; J_L = 1.0 m/s and J_G = 2.0 m/s): (a) input and (b–e) outlets 1 to 4. WMS data at the outlets are rotated due to a different installation.
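The void-fraction quantities discussed above are obtained by normalising each wire-mesh crossing point between its all-liquid and all-gas calibration values and averaging over the cross-section. The sketch below shows that generic processing step; the array shapes, calibration values and random frames are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.uniform(0.0, 1.0, size=(3000, 12, 12))   # 3000 frames from a 12 x 12 WMS (synthetic)
v_liquid = np.ones((12, 12))                          # calibration: pipe completely full of liquid
v_gas = np.zeros((12, 12))                            # calibration: pipe completely full of gas

# Local gas fraction per crossing point, then the cross-sectional mean per frame.
alpha_local = (v_liquid - frames) / (v_liquid - v_gas)
alpha_series = alpha_local.mean(axis=(1, 2))          # void fraction time series
print(alpha_series[:5], alpha_series.mean())
```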
11 pages, 2737 KiB  
Article
Robust Entangled-Photon Ghost Imaging with Compressive Sensing
by Jun Li, Wenyu Gao, Jiachuan Qian, Qinghua Guo, Jiangtao Xi and Christian H. Ritz
Sensors 2019, 19(1), 192; https://doi.org/10.3390/s19010192 - 7 Jan 2019
Cited by 15 | Viewed by 5179
Abstract
This work experimentally demonstrates that the imaging quality of quantum ghost imaging (GI) with entangled photons can be significantly improved by properly handling the errors caused by the imperfection of optical devices. We also consider compressive GI to reduce the number of measurements [...] Read more.
This work experimentally demonstrates that the imaging quality of quantum ghost imaging (GI) with entangled photons can be significantly improved by properly handling the errors caused by the imperfection of optical devices. We also consider compressive GI to reduce the number of measurements and thereby the data acquisition time. The image reconstruction is formulated as a sparse total least square problem which is solved with an iterative algorithm. Our experiments show that, compared with existing methods, the new method can achieve a significant performance gain in terms of mean square error and peak signal–noise ratio. Full article
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1

Figure 1
<p>The experimental schematic of STLS quantum GI: HWP, half-wave plate; BBO, β-barium borate crystal; BS, beam splitter; Random patterns placed on a spatial light modulator (SLM); SPCM, single photon counting modules; C.C, coincidence measurement between SPCM1 and SPCM2.</p>
Full article ">Figure 2
<p>Coordinate descent method for solving STLS.</p>
Full article ">Figure 3
<p>Reconstruction results of object with OMP, GPSR, Method proposed in [<a href="#B19-sensors-19-00192" class="html-bibr">19</a>], STLS (300 samples), and TLS (4500 samples). (<b>a</b>) Original object, (<b>b</b>) OMP result, (<b>c</b>) GPSR result, (<b>d</b>) Method proposed in [<a href="#B19-sensors-19-00192" class="html-bibr">19</a>], (<b>e</b>) TLS result and (<b>f</b>) STLS result.</p>
Full article ">Figure 4
<p>The MSE of the reconstruction image at different iterations when <span class="html-italic">λ</span> = 300, and the sampling number is 300.</p>
Full article ">Figure 5
<p>Experimental platform of our quantum GI, where random patterns are placed on the SLM, and the target is a double-slit.</p>
Full article ">Figure 6
<p>Experimental reconstructed quantum ghost images of the double-slit by (<b>a</b>) OMP, (<b>b</b>) GPSR, (<b>c</b>) Method proposed in [<a href="#B19-sensors-19-00192" class="html-bibr">19</a>] and (<b>d</b>) STLS.</p>
Full article ">Figure 6 Cont.
<p>Experimental reconstructed quantum ghost images of the double-slit by (<b>a</b>) OMP, (<b>b</b>) GPSR, (<b>c</b>) Method proposed in [<a href="#B19-sensors-19-00192" class="html-bibr">19</a>] and (<b>d</b>) STLS.</p>
Full article ">Figure 7
<p>MSE of the reconstructed quantum ghost imaging with OMP, GPSR, Method proposed in [<a href="#B19-sensors-19-00192" class="html-bibr">19</a>] and STLS with different sampling numbers.</p>
Full article ">Figure 8
<p>PSNR of the reconstructed quantum ghost imaging with OMP, GPSR, Method proposed in [<a href="#B19-sensors-19-00192" class="html-bibr">19</a>] and STLS with different sampling numbers.</p>
Full article ">
1 page, 139 KiB  
Correction
Correction: Yan, Y.; et al. A Dynamic Multi-Projection-Contour Approximating Framework for the 3D Reconstruction of Buildings by Super-Generalized Optical Stereo-Pairs. Sensors 2017, 17, 2153
by Yiming Yan, Nan Su, Chunhui Zhao and Liguo Wang
Sensors 2019, 19(1), 191; https://doi.org/10.3390/s19010191 - 7 Jan 2019
Viewed by 2894
Abstract
The authors wish to make the following corrections to this paper [...] Full article
(This article belongs to the Section Remote Sensors)
23 pages, 4036 KiB  
Article
Removal of Gross Artifacts of Transcranial Alternating Current Stimulation in Simultaneous EEG Monitoring
by Siddharth Kohli and Alexander J. Casson
Sensors 2019, 19(1), 190; https://doi.org/10.3390/s19010190 - 7 Jan 2019
Cited by 36 | Viewed by 8024
Abstract
Transcranial electrical stimulation is a widely used non-invasive brain stimulation approach. To date, EEG has been used to evaluate the effect of transcranial Direct Current Stimulation (tDCS) and transcranial Alternating Current Stimulation (tACS), but most studies have been limited to exploring changes in [...] Read more.
Transcranial electrical stimulation is a widely used non-invasive brain stimulation approach. To date, EEG has been used to evaluate the effect of transcranial Direct Current Stimulation (tDCS) and transcranial Alternating Current Stimulation (tACS), but most studies have been limited to exploring changes in EEG before and after stimulation due to the presence of stimulation artifacts in the EEG data. This paper presents two different algorithms for removing the gross tACS artifact from simultaneous EEG recordings. These give different trade-offs in removal performance, in the amount of data required, and in their suitability for closed-loop systems. Superposition of Moving Averages and Adaptive Filtering techniques are investigated, with significant emphasis on verification. We present head phantom testing results for controlled analysis, together with on-person EEG recordings in the time domain, frequency domain, and Event Related Potential (ERP) domain. The results show that EEG during tACS can be recovered free of large-scale stimulation artifacts. Previous studies have not quantified the performance of tACS artifact removal procedures, instead focusing on the removal of second-order artifacts such as respiration-related oscillations. We focus on the unresolved challenge of removing the first-order stimulation artifact and present a new multi-stage validation strategy for it. Full article
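As a rough illustration of reference-based artifact cancellation, where a recorded copy of the tACS output is used to estimate and subtract the artifact from the EEG channel, the sketch below runs a generic LMS adaptive filter on synthetic data. It is not the authors' SMA or AF implementation; the sampling rate, filter length, step size, and signal amplitudes are assumptions chosen only to show the mechanism.

```python
# Toy reference-based removal of a tACS-like sinusoidal artifact with an LMS adaptive filter.
import numpy as np

rng = np.random.default_rng(1)
fs = 1000.0                                         # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = 10e-6 * rng.standard_normal(t.size)           # stand-in for ongoing EEG (V)
ref = np.sin(2 * np.pi * 10 * t)                    # recorded copy of the 10 Hz tACS output
artifact = 5e-3 * np.sin(2 * np.pi * 10 * t + 0.3)  # scaled, phase-shifted artifact at the scalp
recorded = eeg + artifact

# LMS: learn FIR weights mapping the reference onto the artifact; the residual is the cleaned EEG.
n_taps, mu = 64, 0.01
w = np.zeros(n_taps)
cleaned = np.zeros_like(recorded)
for n in range(n_taps, recorded.size):
    x = ref[n - n_taps:n][::-1]      # most recent reference samples
    e = recorded[n] - w @ x          # error = artifact-free estimate of this sample
    w += 2 * mu * e * x              # LMS weight update
    cleaned[n] = e

print("RMS before: %.2e V, after: %.2e V" % (recorded.std(), cleaned[n_taps:].std()))
```

A template-subtraction (SMA-style) approach instead builds a time-localized artifact template for each channel and subtracts it from the recorded data, as described in the Figure 3 caption below.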
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1
<p>Most combined EEG+tES experimental protocols use only a comparison of EEG data before and after the stimulation due to the presence of stimulation artifacts in the EEG trace during stimulation. Figure originally reported in [<a href="#B12-sensors-19-00190" class="html-bibr">12</a>] (with copyright permission from IEEE).</p>
Full article ">Figure 2
<p>EEG data recorded during tACS stimulation. (<b>a</b>) Stimulation begins at 810 s, at which point all recorded data are dominated by a large sinusoidal artifact. Figure originally reported in [<a href="#B12-sensors-19-00190" class="html-bibr">12</a>] (with copyright permission from IEEE). (<b>b</b>) Raw collected EEG+tACS data with no band-limiting filters applied. The ongoing artifact is not a pure sinusoid at the stimulation frequency, but has an approximately 100 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>V ripple present.</p>
Full article ">Figure 3
<p>Artifact removal algorithms. (<b>a</b>) Superposition of Moving Averages (SMA): A time localized artifact template is generated for each channel and subtracted from the recorded data. Figure originally reported in [<a href="#B12-sensors-19-00190" class="html-bibr">12</a>] (with copyright permission from IEEE). (<b>b</b>) Adaptive Filtering (AF): A recorded version of the tACS output is used to dynamically set the artifact removal filter coefficients.</p>
Full article ">Figure 4
<p>Gelatin phantom head model. (<b>Top</b>) internal electrodes for generating EEG signal, with tACS and EEG electrodes placed on the surface. (<b>Bottom</b>) Recorded EEG after 10 Hz, 1 mA tACS starts at time 30 s. Here a 1 s ramp up is included in the stimulation settings.</p>
Full article ">Figure 5
<p>Overview of experimental protocols. (<b>a</b>) The <span class="html-italic">Pre</span> and <span class="html-italic">Post</span> stages are 30 s long with no stimuli presented. The <span class="html-italic">During</span> stage is a minute-long task/stimulus which is presented during tACS stimulation or during a sham where the same task is performed but no tACS is applied. (<b>b</b>) Visual evoked response task protocol. Face and non-face images were shown in a randomized order for 1 s at a time, followed by a 1 s pause with a blank screen. This figure shows an example of a randomized face/non-face sequence. A picture of a famous person was used in the experiment, with the actual image not reproducible here due to copyright constraints.</p>
Full article ">Figure 6
<p>Illustrations of artifact removal from the phantom head. (<b>a</b>) The raw recorded signals and the artifacts which are estimated from these by the two algorithms. The recovered EEG signal compared to the one inputted to the phantom head is also shown. Traces are for a 10 Hz, 0.25 mA stimulation. (<b>b</b>) Example reconstructions for different tACS frequencies. All stimulation amplitudes are 1 mA. For both the SMA and AF approaches the recovered EEG signal visually closely matches that which is inputted into the head model. (<b>c</b>) Frequency domain representation. PSD at 0.25 mA stimulation, with the inputted EEG data split into <span class="html-italic">eyes open</span> and <span class="html-italic">eyes closed</span> periods.</p>
Full article ">Figure 7
<p>SNR between inputted and reconstructed EEG data collected using the phantom head after tACS artifact removal at different tACS stimulation settings.</p>
Full article ">Figure 8
<p>Time domain EEG after artifact removal. (<b>a</b>) The measured artifacts using the two algorithms and the recovered EEG during an eye blink. 10 Hz, 0.25 mA stimulation. (<b>b</b>) EEG data during the alpha task protocol, at PO8, for a single subject showing sham and artifact removed data with tACS at 40 Hz, 1 mA amplitude. Eyes are closed at the 30 s mark. Then bursts of alpha are seen for both sham and stimulation using all three artifact removal approaches. Note that the sham and stimulation are different trial runs and thus the EEG trace for sham and the other figures are not expected to be identical.</p>
Full article ">Figure 9
<p>Power Spectral Density (PSD) data, at PO8, during the alpha task protocol for a single subject with stimulation at 5 Hz (top), 10 Hz (middle) and 40 Hz (bottom), 250 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>A amplitude. The protocol is split into two sections, Eyes Open (left) and Eyes Closed (right). An increase in alpha activity (8–12 Hz) is seen when the eyes are closed in all cases.</p>
Full article ">Figure 10
<p>Average ERP at PO8 after application of the detection algorithm for sham and stimulation, at 40 Hz 250 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>A stimulation in a single subject. All peaks are detected at the expected times and the expected increases in N170 and N400 depths are seen when face stimuli are presented. Note that the sham and stimulation are different protocols and thus the EEG trace for sham and the other figures are not expected to be identical.</p>
Full article ">
19 pages, 1455 KiB  
Article
Multi-Type Sensor Placements in Gaussian Spatial Fields for Environmental Monitoring
by Chenxi Sun, Yangwen Yu, Victor O. K. Li and Jacqueline C. K. Lam
Sensors 2019, 19(1), 189; https://doi.org/10.3390/s19010189 - 7 Jan 2019
Cited by 11 | Viewed by 5459
Abstract
As citizens are increasingly concerned about the surrounding environment, it is important for modern cities to provide sufficient and accurate environmental information to the public for decision making in the era of smart cities. Due to the limited budget, we often need to [...] Read more.
As citizens are increasingly concerned about the surrounding environment, it is important for modern cities to provide sufficient and accurate environmental information to the public for decision making in the era of smart cities. Due to the limited budget, we often need to optimize the sensor placement in order to maximize the overall information gain according to certain criteria. Existing work is primarily concerned with single-type sensor placement; however, environmental monitoring usually requires accurate measurements of multiple types of environmental characteristics. In this paper, we focus on optimal multi-type sensor placement in a Gaussian spatial field for environmental monitoring. We study two representative cases: the one-with-all case, in which each station is equipped with all types of sensors, and the general case, in which each station is equipped with at least one type of sensor. We propose two greedy algorithms accordingly, each with a provable approximation guarantee. We evaluated the proposed approach in an air quality monitoring scenario in Hong Kong, and the experimental results demonstrate its effectiveness. Full article
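The greedy idea can be sketched for the simpler single-type case: starting from an empty placement, repeatedly add the candidate site whose noisy measurement most increases the Gaussian information gain (a log-determinant criterion) about the field. The code below is an illustrative toy, not the paper's multi-type algorithms; the candidate locations, RBF kernel parameters, noise level, and budget are invented for the example.

```python
# Toy greedy single-type sensor placement in a Gaussian spatial field (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
sites = rng.uniform(0, 10, size=(50, 2))   # assumed candidate station locations

def rbf_cov(X, length=2.0, var=1.0):
    """Covariance of the Gaussian field at locations X under an assumed RBF kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-d2 / (2 * length ** 2))

K = rbf_cov(sites)
sigma2 = 0.01                              # assumed measurement noise variance

def info_gain_increase(selected, j):
    """Increase in 0.5*logdet(I + K_S/sigma2) from adding candidate site j to S."""
    def f(idx):
        if not idx:
            return 0.0
        Ks = K[np.ix_(idx, idx)]
        return 0.5 * np.linalg.slogdet(np.eye(len(idx)) + Ks / sigma2)[1]
    return f(selected + [j]) - f(selected)

budget, chosen = 10, []
for _ in range(budget):
    cands = [j for j in range(len(sites)) if j not in chosen]
    gains = [info_gain_increase(chosen, j) for j in cands]
    chosen.append(cands[int(np.argmax(gains))])

print("greedy placement (site indices):", chosen)
```

Because this information-gain objective is monotone submodular, greedy selection of this kind carries the classic constant-factor near-optimality guarantees of the sort the abstract refers to; handling multiple sensor types and per-site installation costs is where the paper's two algorithms depart from this toy.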
(This article belongs to the Special Issue Selected Papers from ISC2 2018)
Show Figures

Figure 1
<p>Illustration of the challenge of multi-type sensor placement.</p>
Full article ">Figure 2
<p>An example of the multi-type sensor placement scheme.</p>
Full article ">Figure 3
<p>Spatial variations of the air quality measurements at the 16 official monitoring stations in Hong Kong in 2017: the blue circles denote <math display="inline"><semantics> <msub> <mi>NO</mi> <mn>2</mn> </msub> </semantics></math> and the red circles denote <math display="inline"><semantics> <msub> <mi>PM</mi> <mrow> <mn>2.5</mn> </mrow> </msub> </semantics></math>. The size of a circle represents the magnitude of the variance of the corresponding random variable.</p>
Full article ">Figure 4
<p>Histogram of the normalized one-hour difference of the hourly measurements at Tung Chung station over the year 2017.</p>
Full article ">Figure 5
<p>Comparison of information gain of different spatial fields with simple greedy selection.</p>
Full article ">Figure 6
<p>Placement results of 10 sensors in Hong Kong for one-with-all case.</p>
Full article ">Figure 7
<p>Placement results for the general case when the budget is 100, the cost is <math display="inline"><semantics> <mrow> <msub> <mi>c</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>c</mi> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>c</mi> <mn>3</mn> </msub> <mo>=</mo> <msub> <mi>c</mi> <mn>4</mn> </msub> <mo>=</mo> <msub> <mi>c</mi> <mn>5</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>c</mi> <mrow> <mi>s</mi> <mi>i</mi> <mi>t</mi> <mi>e</mi> </mrow> </msub> <mo>=</mo> <mn>15</mn> </mrow> </semantics></math>. The left figure is the placement result with the cost-effective greedy selection. The right figure is the placement result with the greedy selection.</p>
Full article ">Figure 8
<p>Performance of the hybrid greedy approach with equal weight <math display="inline"><semantics> <msub> <mi>w</mi> <mi>i</mi> </msub> </semantics></math>.</p>
Full article ">Figure 9
<p>Performance of the hybrid greedy approach with a higher weight for <math display="inline"><semantics> <msub> <mi>PM</mi> <mrow> <mn>2.5</mn> </mrow> </msub> </semantics></math>.</p>
Full article ">Figure 10
<p>Greedy vs. lazy greedy for one-with-all case.</p>
Full article ">Figure 11
<p>Greedy vs. lazy greedy for general case.</p>
Full article ">
15 pages, 5199 KiB  
Article
Highly Sensitive Diode-Based Micro-Pirani Vacuum Sensor with Low Power Consumption
by Debo Wei, Jianyu Fu, Ruiwen Liu, Ying Hou, Chao Liu, Weibing Wang and Dapeng Chen
Sensors 2019, 19(1), 188; https://doi.org/10.3390/s19010188 - 7 Jan 2019
Cited by 11 | Viewed by 5078
Abstract
Micro-Pirani vacuum sensors usually operate at hundreds of microwatts, which limits their application in battery-powered sensor systems. This paper reports a diode-based, low power consumption micro-Pirani vacuum sensor that has high sensitivity. Optimizations to the micro-Pirani vacuum sensor were made regarding two aspects. [...] Read more.
Micro-Pirani vacuum sensors usually operate at hundreds of microwatts, which limits their application in battery-powered sensor systems. This paper reports a diode-based, low power consumption micro-Pirani vacuum sensor that has high sensitivity. Optimizations to the micro-Pirani vacuum sensor were made regarding two aspects. On the one hand, a greater temperature coefficient was obtained without increasing power consumption by taking advantage of series diodes; on the other hand, the sensor structure and geometries were redesigned to enlarge the temperature variation. After that, the sensor was fabricated and tested. Test results indicated that the dynamic vacuum pressure range of the sensor was from 10<sup>−1</sup> to 10<sup>4</sup> Pa when the forward bias current was as low as 10 μA with a power consumption of 50 μW. The average sensitivity was up to 90 μV/Pa and the sensitivity per unit power consumption increased to 1.8 V/W/Pa. In addition, the sensor can also work at a greater forward bias current for better performance. Full article
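The headline figures are internally consistent if the sensitivity per unit power consumption is read as the average sensitivity divided by the dissipated power; the short check below works through that arithmetic. The values are taken from the abstract, but this reading of the quantity is an assumption, not a formula from the paper.

```python
# Quick arithmetic check of the figures quoted in the abstract.
bias_current = 10e-6   # A, forward bias current
power = 50e-6          # W, quoted power consumption
sensitivity = 90e-6    # V/Pa, quoted average sensitivity

implied_voltage = power / bias_current   # ~5 V across the six series diodes (~0.83 V each)
per_unit_power = sensitivity / power     # sensitivity normalized by dissipated power

print(f"implied voltage drop: {implied_voltage:.1f} V")
print(f"sensitivity per unit power: {per_unit_power:.2f} V/W/Pa")   # -> 1.80 V/W/Pa
```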
(This article belongs to the Section Physical Sensors)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Classical design of a micro-hotplate (MHP) in a micro-Pirani vacuum sensor.</p>
Full article ">Figure 2
<p>(<b>a</b>) Temperature variation of a micro-hotplate (MHP) rises with the decrease in vacuum pressure because of the reduced gaseous heat loss; (<b>b</b>) the voltage drop variation of the six series diodes rises with temperature variation; (<b>c</b>) the voltage drop across the six series diodes falls with decreasing vacuum pressure as a result of the negative temperature coefficient of the diodes.</p>
Full article ">Figure 3
<p>Temperature coefficient dependence on (<b>a</b>) forward bias current and (<b>b</b>) number of series diodes.</p>
Full article ">Figure 4
<p>(<b>a</b>) Detailed process of obtaining a greater temperature coefficient with the same power consumption. (<b>b</b>) Temperature coefficient is improved from <span class="html-italic">C</span><sub>0</sub> to <span class="html-italic">C</span><sub>1</sub> by reducing the forward bias current and is further improved from <span class="html-italic">C</span><sub>1</sub> to <span class="html-italic">C</span><sub>2</sub> by increasing the number of series diodes.</p>
Full article ">Figure 5
<p>(<b>a</b>) Schematic and (<b>b</b>) cross-sectional views of the redesigned micro-Pirani vacuum sensor structure.</p>
Full article ">Figure 6
<p>Temperature variation distribution of the micro-Pirani vacuum sensor.</p>
Full article ">Figure 7
<p>Temperature variation is (<b>a</b>) proportional to the forward bias current and cantilever length, and (<b>b</b>) inversely proportional to the length/width of the MHP and the width/thickness of the cantilever.</p>
Full article ">Figure 8
<p>Fabrication process of the micro-Pirani vacuum sensor.</p>
Full article ">Figure 9
<p>(<b>a</b>) SEM image of six series diodes. (<b>b</b>) Top and (<b>c</b>) cross-sectional views of the fabricated micro-Pirani vacuum sensor.</p>
Full article ">Figure 10
<p>(<b>a</b>) Current–voltage (I–V) characteristics dependence on temperature. (<b>b</b>) Voltage drop dependence on temperature. (<b>c</b>) Temperature coefficient dependence on forward bias current of six series diodes.</p>
Full article ">Figure 11
<p>Voltage drop across six series diodes is a function of vacuum pressure and forward bias current.</p>
Full article ">Figure 12
<p>Sensitivity of the six-series-diode-based micro-Pirani vacuum sensor is proportional to the forward bias current and inversely proportional to vacuum pressure.</p>
Full article ">
Previous Issue
Back to Top