
Sensors, Volume 19, Issue 3 (February-1 2019) – 312 articles

Cover Story (view full-size image): Printable electronics (PE) allow for the low-cost, high-volume production of customized electronic devices, which makes PE appealing to a wide range of industries. For the fabrication of PE devices at an industrial scale, however, an in-line quality control tool has to be developed. Following the idea of the color control bar used in traditional graphic art printing, we developed a quality control bar made from a terahertz vortex phase plate (VPP) that is able to track the printed ink condition during production. We demonstrated experimentally by terahertz time-domain spectroscopy that the transmission response of the VPP as a function of ink conductivity is consistent and more repeatable than conventional conductivity measurements, e.g., the four-point probe. Our results open the door to a simple, non-destructive, real-time, and contactless evaluation strategy for PE device manufacturing.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 28319 KiB  
Article
FeinPhone: Low-cost Smartphone Camera-based 2D Particulate Matter Sensor
by Matthias Budde, Simon Leiner, Marcel Köpke, Johannes Riesterer, Till Riedel and Michael Beigl
Sensors 2019, 19(3), 749; https://doi.org/10.3390/s19030749 - 12 Feb 2019
Cited by 11 | Viewed by 9702
Abstract
Precise, location-specific fine dust measurement is central to the assessment of urban air quality. Classic measurement approaches require dedicated hardware: professional equipment is still prohibitively expensive (>$10k) for dense measurements, while inexpensive sensors do not meet accuracy demands. As a step towards filling this gap, we propose FeinPhone, a phone-based fine dust measurement system that uses the camera and flashlight functions readily available on today’s off-the-shelf smartphones. We introduce a cost-effective passive hardware add-on together with a novel counting approach based on light-scattering particle sensors. Since our approach features a 2D sensor (the camera) instead of a single photodiode, we can employ it to capture the scatter traces of individual particles rather than just retaining a summed light intensity signal as in simple photometers. This is a more direct way of assessing the particle count; it is robust against side effects, e.g., from camera image compression, and yields information on the size spectrum of the particles. Our proof-of-concept evaluation, comparing several FeinPhone sensors against a high-quality APS/SMPS (Aerodynamic Particle Sizer/Scanning Mobility Particle Sizer) reference device at the World Calibration Center for Aerosol Physics, shows that the collected data correlate well with the inhalable coarse fraction of fine dust particles (r > 0.9) and successfully capture its levels under realistic conditions.
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing)
Figure 1
<p>In light-scattering sensors (a.k.a. nephelometers), light is emitted into a measurement chamber. If particles are present, the light is scattered and collected by a photodiode.</p>
Figure 2
<p>Sketch of the magnifier-based sensor principle: this detector design allows capturing virtual images of the scatter traces created by individual particles traveling through the measurement volume.</p>
Figure 3
<p>Custom light trap design with a mirror to relay the light from the flash LED into the measurement chamber.</p>
Figure 4
<p>(<b>a</b>) <span class="html-italic">Galaxy S6</span> phone with <span class="html-italic">FeinPhone</span> sensor prototype. (<b>b</b>) The clip-on modules were 3D printed for rapid prototyping.</p>
Figure 5
<p>(<b>a</b>) Scatter blob [<a href="#B29-sensors-19-00749" class="html-bibr">29</a>] vs. (<b>b</b>) individual scatter traces, both shown at high particle concentrations for illustrative purposes.</p>
Figure 6
<p>Contour Detection Particle Counting (CDPC): the original recordings (<b>a</b>) undergo background subtraction, blurring, and binarization before a contour detection algorithm isolates continuous patches; all patches with an area exceeding a preset threshold are counted (<b>b</b>).</p>
Figure 7
<p>Evaluation Setup: The <span class="html-italic">FeinPhone</span> prototypes were placed inside an aluminum measurement chamber (<b>left</b>), into which a varying concentration of polydisperse particles was injected. A custom SMPS and a TSI APS (<b>right</b>) were used for reference measurements.</p>
Figure 8
<p>Particle counts obtained from the CDPC algorithm vs. reference signal (PM<math display="inline"><semantics> <msub> <mrow/> <mrow> <mo>(</mo> <mn>10</mn> <mo>−</mo> <mn>2.5</mn> <mo>)</mo> </mrow> </msub> </semantics></math> size fraction) for sensors B005 (<b>left</b>) and B001 (<b>right</b>). CDPC parameters: <math display="inline"><semantics> <msub> <mi>r</mi> <mrow> <mi>l</mi> <mi>e</mi> <mi>a</mi> <mi>r</mi> <mi>n</mi> </mrow> </msub> </semantics></math>: <math display="inline"><semantics> <mrow> <mn>0.1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mo>〈</mo> <msub> <mi>σ</mi> <mrow> <mi>b</mi> <mi>l</mi> <mi>u</mi> <mi>r</mi> </mrow> </msub> <mo>〉</mo> </mrow> </semantics></math>: 9, <math display="inline"><semantics> <msub> <mi>θ</mi> <mrow> <mi>b</mi> <mo>/</mo> <mi>w</mi> </mrow> </msub> </semantics></math>: <math display="inline"><semantics> <mrow> <mn>50.0</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <msub> <mi>θ</mi> <mrow> <mi>c</mi> <mi>o</mi> <mi>n</mi> <mi>t</mi> <mi>o</mi> <mi>u</mi> <mi>r</mi> </mrow> </msub> </semantics></math>: <math display="inline"><semantics> <mrow> <mn>100.0</mn> </mrow> </semantics></math>.</p>
Figure 9
<p>Combined approach: The output from the particle counting (CDPC) algorithm is subsequently piped through the Poisson Particle Detection (PPD). The graphs were shifted to compensate for the time delay caused by the reference method (see above). They show very good qualitative agreement with the PM<math display="inline"><semantics> <msub> <mrow/> <mrow> <mo>(</mo> <mn>10</mn> <mo>−</mo> <mn>2.5</mn> <mo>)</mo> </mrow> </msub> </semantics></math> size fraction of the reference, here shown for sensors B005 (<b>left</b>) and B001 (<b>right</b>).</p>
Figure 10
<p>Scatterplots of the reference measurements vs. the results of the CDPC algorithm (<b>left</b>) and of the combined CDPC/PPD algorithm (<b>right</b>), each for sensor <span class="html-italic">B005</span> (<b>top row</b>), <span class="html-italic">B001</span> (<b>middle row</b>), and <span class="html-italic">B003</span> (<b>bottom row</b>).</p>
Figure 11
<p>Results from the unventilated sensor (<span class="html-italic">B003</span>). In our lab experiments, it made no notable difference whether the sensor was ventilated or not (compare the data of sensor <span class="html-italic">B001</span> in <a href="#sensors-19-00749-f008" class="html-fig">Figure 8</a> and <a href="#sensors-19-00749-f009" class="html-fig">Figure 9</a>).</p>
Figure 12
<p>Due to rapid prototyping with a 3D printer and the manual assembly of the sensor hardware, the measurement chambers were not 100% identical, resulting in different background illumination.</p>
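The CDPC pipeline summarized in the Figure 6 caption above (background subtraction, blur, binarization, contour isolation, area thresholding) can be sketched in a few lines. The following Python is an illustrative reconstruction, not the authors' implementation: it substitutes connected-component labelling for contour detection, and all parameter values are placeholders rather than the published settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def cdpc_count(frame, background, sigma_blur=9, theta_bw=50.0, theta_contour=100.0):
    """Count particle scatter traces in one camera frame (illustrative sketch).

    frame, background: 2D grayscale arrays. The parameter names mirror the
    sigma_blur / theta_b/w / theta_contour symbols of the Figure 8 caption,
    but the default values here are placeholders.
    """
    # 1. Background subtraction isolates light scattered by particles.
    diff = frame.astype(float) - background.astype(float)
    # 2. Gaussian blur suppresses pixel noise and compression artifacts.
    smoothed = gaussian_filter(diff, sigma=sigma_blur)
    # 3. Binarization keeps only bright scatter traces.
    binary = smoothed > theta_bw
    # 4. Connected-component labelling stands in for contour detection.
    labels, _ = label(binary)
    # 5. Count components whose area exceeds the contour threshold.
    areas = np.bincount(labels.ravel())[1:]  # skip background label 0
    return int(np.sum(areas >= theta_contour))
```

The per-frame counts would then feed the Poisson Particle Detection (PPD) stage described for Figure 9.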
14 pages, 3395 KiB  
Article
Enhanced Auditory Steady-State Response Using an Optimized Chirp Stimulus-Evoked Paradigm
by Xiaoya Liu, Shuang Liu, Dongyue Guo, Yue Sheng, Yufeng Ke, Xingwei An, Feng He and Dong Ming
Sensors 2019, 19(3), 748; https://doi.org/10.3390/s19030748 - 12 Feb 2019
Cited by 10 | Viewed by 5326
Abstract
Objectives: It has been reported recently that gamma-band measures of the electroencephalogram (EEG) might provide candidate biomarkers of mental diseases such as schizophrenia, Alzheimer’s disease, and affective disorders, but it is difficult to induce visually and tactilely evoked responses at high frequencies. Although a high-frequency response evoked by auditory stimuli is achievable, the quality of the recorded response is not ideal, e.g., a relatively low signal-to-noise ratio (SNR). Auditory steady-state responses (ASSRs) play an essential role in basic auditory studies and clinical use, but improving their quality remains a challenge. This study aims at designing a more comfortable and suitable evoked paradigm that enhances the quality of ASSRs in healthy subjects, so as to further apply it in clinical practice. Methods: Chirp and click stimuli at 40 Hz and 60 Hz were employed to evoke the gamma-ASSR, with the sound adjusted to 45 dB sound pressure level (SPL). Twenty healthy subjects with normal hearing participated, and 64-channel EEGs were simultaneously recorded during the experiment. Event-related spectral perturbation (ERSP) and the SNR of the ASSRs were measured and analyzed to verify the feasibility and adaptability of the proposed evoked paradigm. Results: The proposed evoked paradigm enhanced ASSRs with strong feasibility and adaptability. (1) ASSR waveforms in the time domain indicated that 40 Hz stimuli induced significantly larger peak-to-peak values than 60 Hz stimuli (p < 0.01**); ERSP showed clear ASSRs at each lead for both 40 Hz and 60 Hz, for click as well as chirp stimuli. (2) The SNRs of the ASSRs were –3.23 ± 1.68, –2.44 ± 2.90, –4.66 ± 2.09, and –3.53 ± 3.49 for the 40 Hz click, 40 Hz chirp, 60 Hz click, and 60 Hz chirp, respectively, indicating that chirp stimuli induced significantly better ASSRs than clicks, and that 40 Hz ASSRs had a higher SNR than 60 Hz ones (p < 0.01**). Limitations: The sample size was small and the age range was limited. Conclusions: This study verified the feasibility and adaptability of the proposed evoked paradigm for improving the quality of the gamma-ASSR, which is significant for clinical application. The results suggest that the 40 Hz ASSR evoked by chirp stimuli performed best and is a promising candidate for clinical practice, especially for mental diseases such as schizophrenia, Alzheimer’s disease, and affective disorders.
(This article belongs to the Special Issue Neurophysiological Data Denoising and Enhancement)
Figure 1
<p>The schematic representation of the click and chirp stimuli: (<b>a</b>) the waveform of the chirp stimuli; (<b>b</b>) the waveform of the click stimuli. 40 Hz stimuli waveform at the top and 60 Hz stimuli waveform at the bottom in each subgraph.</p>
Figure 2
<p>The flow chart of the auditory evoked paradigm in this study: each frequency is presented 28 times in each block.</p>
Figure 3
<p>The scoring criteria of the degree of tolerance for click stimuli and chirp stimuli in the evoked paradigm.</p>
Figure 4
<p>Mean time-domain waveforms of auditory steady-state responses (ASSRs) for 20 healthy subjects under selected leads: (<b>a</b>) Mean ASSR amplitude: the black line represents the ASSR amplitude within 1 s after the stimulus, and the red line represents the ASSR amplitude within 0.5 s after the stimulus; (<b>b</b>) The boxplot of the mean ASSR peak-to-peak value: the red line represents the median; ■ and ° represent outliers; ** represents <span class="html-italic">p</span> ≤ 0.01.</p>
Figure 5
<p>The event-related spectral perturbation (ERSP) averaged across all leads from 20 subjects: 40 Hz ERSP at the top and 60 Hz ERSP at the bottom. The left column represents the click stimuli and the right represents chirp stimuli. The dotted black lines represent the stimulus moment, namely the 0 time. The larger the number on the color bar, the stronger the evoked ASSRs response.</p>
Figure 6
<p>Mean signal-to-noise ratio (SNR) of the electroencephalogram across 20 subjects: (<b>a</b>) 40 Hz at the top and 60 Hz at the bottom. The left column represents the click stimuli and the right represents the chirp stimuli; (<b>b</b>) histogram of the mean SNR of the ASSR across all leads.</p>
Figure 7
<p>Mean SNR histogram at each lead across 20 subjects: (<b>a</b>) the mean SNR histogram of 40 Hz chirp stimulus and 40 Hz click stimulus; (<b>b</b>) the mean SNR histogram of 40 Hz chirp stimulus and 60 Hz chirp stimulus; * represents <span class="html-italic">P</span> ≤ 0.05, ** represents <span class="html-italic">P</span> ≤ 0.01.</p>
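The SNR values reported in the abstract above can be estimated from the EEG spectrum. The sketch below uses one common definition (power at the stimulation frequency relative to the mean power of neighboring FFT bins); this definition and the parameter `n_neighbors` are assumptions for illustration, not necessarily the paper's exact formula.

```python
import numpy as np

def assr_snr_db(eeg, fs, f_stim, n_neighbors=10):
    """Estimate the ASSR SNR (in dB) at the stimulation frequency.

    eeg: 1D signal, fs: sampling rate in Hz, f_stim: stimulation frequency.
    SNR = power in the bin closest to f_stim over the mean power of
    n_neighbors bins on each side (an assumed, common definition).
    """
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_stim)))          # bin closest to f_stim
    lo, hi = max(k - n_neighbors, 0), min(k + n_neighbors + 1, len(spectrum))
    neighbors = np.r_[spectrum[lo:k], spectrum[k + 1:hi]]
    return 10.0 * np.log10(spectrum[k] / neighbors.mean())
```

For a strong 40 Hz response this yields a large positive value, while a flat noise background yields a value near zero.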
19 pages, 5976 KiB  
Article
Analysis and Evaluation of the Image Preprocessing Process of a Six-Band Multispectral Camera Mounted on an Unmanned Aerial Vehicle for Winter Wheat Monitoring
by Jiale Jiang, Hengbiao Zheng, Xusheng Ji, Tao Cheng, Yongchao Tian, Yan Zhu, Weixing Cao, Reza Ehsani and Xia Yao
Sensors 2019, 19(3), 747; https://doi.org/10.3390/s19030747 - 12 Feb 2019
Cited by 28 | Viewed by 5528
Abstract
Unmanned aerial vehicle (UAV)-based multispectral sensors have great potential for crop monitoring due to their high flexibility, high spatial resolution, and ease of operation. Image preprocessing, however, is a prerequisite for making full use of the acquired high-quality data in practical applications. Most crop monitoring studies have focused on specific procedures or applications, and there has been little attempt to examine the accuracy of the data preprocessing steps. This study focuses on the preprocessing process of a six-band multispectral camera (Mini-MCA6) mounted on UAVs. First, we quantified and analyzed the components of sensor error, including noise, vignetting, and lens distortion. Next, different methods of spectral band registration and radiometric correction were evaluated. Then, an appropriate image preprocessing process was proposed. Finally, the applicability and potential for crop monitoring were assessed in terms of the accuracy of leaf area index (LAI) and leaf biomass inversion under variable growth conditions during five critical growth stages of winter wheat. The results show that noise and vignetting could be effectively removed using correction coefficients in image processing. The widely used Brown model was suitable for lens distortion correction of the Mini-MCA6. Band registration based on ground control points (GCPs) (root-mean-square error, RMSE = 1.02 pixels) was superior to that using the PixelWrench2 (PW2) software (RMSE = 1.82 pixels). For radiometric correction, the accuracy of the empirical linear correction (ELC) method was significantly higher than that of the light intensity sensor correction (ILSC) method. The multispectral images processed using the optimal correction methods were demonstrated to be reliable for estimating LAI and leaf biomass. This study provides a feasible and semi-automatic image preprocessing process for the UAV-based Mini-MCA6, which also serves as a reference for other array-type multispectral sensors. Moreover, the high-quality data generated in this study may stimulate increased interest in high-efficiency remote monitoring of crop growth status.
(This article belongs to the Section Remote Sensors)
Figure 1
<p>A photo of the Tetracam Mini-MCA6. ILS, incident light sensor.</p>
Figure 2
<p>A photo of the UAV system equipped with (<b>a</b>) a ThinkPad laptop; (<b>b</b>) a Graupner MC-32 control module; and (<b>c</b>) an ARF-MikroKopter UAV.</p>
Figure 3
<p>A photo of the uniform source system (CSTM-USS-1200C; Labsphere, Inc., North Sutton, NH, USA).</p>
Figure 4
<p>The distribution and the size of the ground control points (GCPs).</p>
Figure 5
<p>The calibration canvas with different reflectance values (<b>a</b>) and the corresponding spectral signatures (<b>b</b>).</p>
Figure 6
<p>The correction file from the light intensity sensor.</p>
Figure 7
<p>A comparison of the MCA-0 images (<b>a</b>) before and (<b>c</b>) after vignetting correction based on the correction factor image (<b>b</b>).</p>
Figure 8
<p>A comparison of the MCA-4 images (<b>a</b>) before and (<b>b</b>) after lens distortion correction at the booting stage.</p>
Figure 9
<p>Mini-MCA6 multispectral images from different band registration methods. (<b>a</b>) Original image stacking; (<b>b</b>) PW2-based method; and (<b>c</b>) GCP-based method.</p>
Figure 10
<p>The root-mean-square error (RMSE) of mismatched pixels between each slave and the main channel for different band registration methods.</p>
Figure 11
<p>Comparison of the analytical spectral device (ASD) reflectance with the calibrated values for five bands at four growth stages with the light intensity sensor correction (ILSC) and empirical linear correction (ELC) radiometric correction methods. Panels (<b>a</b>–<b>b</b>), (<b>c</b>–<b>d</b>), (<b>e</b>–<b>f</b>), (<b>g</b>–<b>h</b>), and (<b>i</b>–<b>j</b>) are the plots for MCA-1, MCA-0, MCA-2, MCA-3, and MCA-4, respectively. The two columns from left to right correspond to the ILSC and ELC radiometric correction methods, respectively.</p>
Figure 12
<p>Leaf area index (LAI) mappings using the relationship between MTVI<sub>2</sub> and LAI at different growth stages: (<b>a</b>) jointing stage; (<b>b</b>) booting stage; (<b>c</b>) heading stage; (<b>d</b>) anthesis stage; and (<b>e</b>) filling stage. N is nitrogen rate (N0 = 0, N1 = 150, N2 = 300 kg/ha). D is row spacing (D1 = 25, D2 = 40 cm). V represents the wheat varieties Shengxuan 6 (V1) and Yangmai 18 (V2).</p>
Figure 13
<p>Leaf biomass mappings using the relationship between MTVI<sub>2</sub> and leaf biomass at different growth stages: (<b>a</b>) jointing stage; (<b>b</b>) booting stage; (<b>c</b>) heading stage; (<b>d</b>) anthesis stage; and (<b>e</b>) filling stage. N is nitrogen rate (N0 = 0, N1 = 150, N2 = 300 kg/ha). D is row spacing (D1 = 25, D2 = 40 cm). V represents the wheat varieties Shengxuan 6 (V1) and Yangmai 18 (V2).</p>
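The empirical linear correction (ELC) that the abstract above found most accurate is, at its core, a per-band linear fit from image digital numbers (DN) to the known reflectance of calibration targets (such as the canvas panels in Figure 5), applied to the whole band. The sketch below illustrates this generic idea only; it is not the authors' implementation, and the sample values are invented.

```python
import numpy as np

def empirical_line_correction(dn_targets, reflectance_targets, image_dn):
    """Empirical linear correction (ELC) for one spectral band (sketch).

    dn_targets: DN values extracted over calibration targets.
    reflectance_targets: the targets' known reflectance values.
    image_dn: the band's DN array to convert to reflectance.
    """
    # Fit reflectance = gain * DN + offset from the calibration targets.
    gain, offset = np.polyfit(dn_targets, reflectance_targets, deg=1)
    # Apply the fitted line to every pixel of the band.
    return gain * np.asarray(image_dn, dtype=float) + offset
```

With several targets spanning the reflectance range, the least-squares fit absorbs both sensor gain and illumination, which is why ELC needs in-field reference panels.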
14 pages, 19797 KiB  
Article
Three-Dimensional Visualization System with Spatial Information for Navigation of Tele-Operated Robots
by Seung-Hun Kim, Chansung Jung and Jaeheung Park
Sensors 2019, 19(3), 746; https://doi.org/10.3390/s19030746 - 12 Feb 2019
Cited by 3 | Viewed by 4482
Abstract
This study describes a three-dimensional visualization system with spatial information for the effective control of a tele-operated robot. A visualization system of the environment is very important for operating the robot, since the tele-operated robot performs tasks in disaster areas that are not accessible to humans. The visualization system should run in real time to cope with rapidly changing situations, and it should provide accurate, high-level information so that the tele-operator can make the right decisions. The proposed system consists of four fisheye cameras and a 360° laser scanner. When the robot moves into an unknown space, a spatial model is created using the spatial information from the laser scanner, and a single stitched image is created from the four camera images and mapped onto the model in real time. The visualized image contains the surrounding spatial information; hence, the tele-operator can not only grasp the surrounding space easily, but also knows the relative position of the robot in the space. In addition, the system provides various angles of view without moving the robot or sensors, thereby coping with various situations. The experimental results show that the proposed method produces a more natural appearance than conventional methods.
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing)
Figure 1
<p>Demonstration of the wrap-around view monitor (WAVM).</p>
Figure 2
<p>Schematic of the proposed system. RoI: region of interest.</p>
Figure 3
<p>Fisheye camera image with distortion.</p>
Figure 4
<p>The spherical geometric model.</p>
Figure 5
<p>Undistorted image.</p>
Figure 6
<p>Cropped and warped images of RoIs A and B in <a href="#sensors-19-00746-f005" class="html-fig">Figure 5</a>.</p>
Figure 7
<p>Blended and stitched RoI A and B images using the four images in <a href="#sensors-19-00746-f006" class="html-fig">Figure 6</a>.</p>
Figure 8
<p>Coordinate representation according to spatial information.</p>
Figure 9
<p>Splitting image <span class="html-italic">I</span> with three-dimensional points by Equations (7)–(10).</p>
Figure 10
<p>Splitting wall and floor images in <span class="html-italic">I</span><span class="html-italic"><sub>n</sub></span>. The upper region (blue line) of the divided image is mapped to the wall, while the lower region (red line) is mapped to the blank floor.</p>
Figure 11
<p>An example of mapping the stitched RoI A and B images to the spatial model. The regions of the yellow and blue lines are the RoI B and RoI A images in <a href="#sensors-19-00746-f007" class="html-fig">Figure 7</a>. The region of the red line is the same as the region of the red line in <a href="#sensors-19-00746-f010" class="html-fig">Figure 10</a>.</p>
Figure 12
<p>Results of mapping the stitched image to the spatial model in the corridor.</p>
Figure 13
<p>Configuration of four cameras and a 360° laser scanner on the robot.</p>
Figure 14
<p>Robot’s location and direction (<b>A–C</b>) in the corridor drawing.</p>
Figure 15
<p>Comparison of the experimental results between the existing WAVM (left) [<a href="#B10-sensors-19-00746" class="html-bibr">10</a>] and the proposed system (right) at locations (<b>A–C</b>) in <a href="#sensors-19-00746-f014" class="html-fig">Figure 14</a>.</p>
Figure 16
<p>Result comparisons of four methods in the same viewpoint. (<b>a</b>) Ground truth, (<b>b</b>) SLAM [<a href="#B17-sensors-19-00746" class="html-bibr">17</a>,<a href="#B18-sensors-19-00746" class="html-bibr">18</a>,<a href="#B19-sensors-19-00746" class="html-bibr">19</a>], (<b>c</b>) existing WAVM [<a href="#B10-sensors-19-00746" class="html-bibr">10</a>], (<b>d</b>) proposed system.</p>
Figure 17
<p>Resulting images in various and complex environments.</p>
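The stitching step described above (Figure 7) blends the overlapping regions of adjacent warped camera images into one seamless view. A minimal linear (alpha) blend over the overlap, shown below, is a simplified stand-in for the paper's RoI blending; the paper's warping and seam handling are more involved, and this sketch assumes the two inputs are already geometrically aligned.

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Linearly blend two adjacent, already-warped images (sketch).

    left, right: 2D arrays whose last/first `overlap` columns cover the
    same scene region after warping.
    """
    h, wl = left.shape
    wr = right.shape[1]
    alpha = np.linspace(1.0, 0.0, overlap)          # weight for `left`
    out = np.empty((h, wl + wr - overlap))
    # Non-overlapping parts are copied through unchanged.
    out[:, :wl - overlap] = left[:, :wl - overlap]
    out[:, wl:] = right[:, overlap:]
    # In the overlap, cross-fade from left to right column by column.
    out[:, wl - overlap:wl] = (alpha * left[:, wl - overlap:] +
                               (1 - alpha) * right[:, :overlap])
    return out
```

Applying this around all four camera pairs yields the single stitched RoI image that is then mapped onto the laser-scanner spatial model.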
17 pages, 5915 KiB  
Article
Research on Damage Detection of a 3D Steel Frame Model Using Smartphones
by Botao Xie, Jinke Li and Xuefeng Zhao
Sensors 2019, 19(3), 745; https://doi.org/10.3390/s19030745 - 12 Feb 2019
Cited by 20 | Viewed by 4500
Abstract
Smartphones, with their built-in suite of sensors, network transmission, data storage, and embedded processing capabilities, provide a wide range of response measurement opportunities for structural health monitoring (SHM). The objective of this work was to evaluate and validate the use of smartphones for monitoring damage states in a three-dimensional (3D) steel frame structure subjected to shaking table earthquake excitation. The steel frame is a single-layer structure with four viscous dampers mounted at the beam-column joints to simulate different damage states at their respective locations. The structural acceleration and displacement responses of the undamaged and damaged frames were obtained simultaneously using smartphones and conventional sensors, and the collected response data were compared. Since smartphones can monitor 3D acceleration in a given space and biaxial displacement in a given plane, the acceleration and displacement responses along the Y-axis of the model structure were also obtained. Wavelet packet decomposition and relative wavelet entropy (RWE) were employed to analyze the acceleration data for damage detection. The results show that the acceleration responses monitored by the smartphones match those of the traditional sensors well, with errors generally within 5%. The displacements acquired by the smartphones agree well overall with those of the laser displacement sensors, and error analysis shows that smartphones with a displacement response sampling rate of 30 Hz are more suitable for monitoring structures with low natural frequencies. Damage detection with the two kinds of sensors performs comparably well. However, the asymmetry of the structure’s spatial stiffness leads to larger RWE value errors in the smartphone monitoring data.
(This article belongs to the Special Issue Smart Sensors for Structural Health Monitoring)
Figure 1
<p>Schematic illustration of the test frame model.</p>
Figure 2
<p>The various sensors instrumented in different locations in the steel frame testbed are shown.</p>
Figure 3
<p>Schematic illustration of an earthquake wave.</p>
Figure 4
<p>The acceleration time-history responses, as collected by the piezoelectric accelerometers (PAs) and smartphones (SPs) for four damage cases subjected to Nr-1 cm excitation, are compared. Representative results from (<b>a</b>) frame 1 and (<b>b</b>) frame 2 in the undamaged and damaged 1 cases, as well as (<b>c</b>) frame 1 and (<b>d</b>) frame 2 in the damaged 2 and damaged 3 cases are overlaid.</p>
Figure 5
<p>Using the acceleration time-history measurements from <a href="#sensors-19-00745-f004" class="html-fig">Figure 4</a>, the power spectral density functions of (<b>a</b>) frame 1 and (<b>b</b>) frame 2 in the undamaged and damaged 1 cases, as well as (<b>c</b>) frame 1 and (<b>d</b>) frame 2 in the damaged 2 and damaged 3 cases are compared.</p>
Figure 6
<p>The Y-axis acceleration time-history responses as collected by smartphones subjected to Nr-1 cm are shown (<b>a</b>) in the undamaged and damaged 1 cases and (<b>b</b>) in the damaged 2 and 3 cases.</p>
Figure 7
<p>The displacement time-history responses as collected by the LDSs (laser displacement sensors) and SPs with Nr-1 cm excitation for four damage cases are compared and found to show good agreement. Representative results are shown from (<b>a</b>) frame 1 and (<b>b</b>) frame 2 in the undamaged case and damaged 1 case, as well as from (<b>c</b>) frame 1 and (<b>d</b>) frame 2 in the damaged 2 and 3 cases.</p>
Figure 8
<p>Displacement of the frame in the Y direction for (<b>a</b>) the Northridge-1 cm case and (<b>b</b>) the Northridge-2 cm case.</p>
Figure 9
<p>The displacement time-history responses as collected by SPs compared with LDSs using different sampling rates of (<b>a</b>) 100 Hz, (<b>b</b>) 50 Hz, and (<b>c</b>) 25 Hz. (<b>d</b>) A comparison of all sampling rates.</p>
Figure 10
<p><span class="html-italic">p<sub>j</sub></span> calculated for (<b>a</b>) frame 1 and (<b>b</b>) frame 2 in the damaged 1 case, (<b>c</b>) frame 1 and (<b>d</b>) frame 2 in the damaged 2 case, and (<b>e</b>) frame 1 and (<b>f</b>) frame 2 in the damaged 3 case when the structure is subjected to Nr-1 cm excitation.</p>
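The relative wavelet entropy (RWE) damage index used above compares the wavelet-packet energy distributions of a reference (undamaged) response and a test response. The sketch below computes the usual Kullback-Leibler form of RWE from precomputed band energies; the wavelet packet decomposition that produces those energies is omitted, and this definition is an illustrative assumption, not necessarily the paper's exact formula.

```python
import numpy as np

def relative_wavelet_entropy(e_ref, e_test, eps=1e-12):
    """Relative wavelet entropy between two band-energy vectors (sketch).

    e_ref, e_test: energies of the wavelet-packet sub-bands for the
    reference and test responses. Each vector is normalized to a
    probability distribution, then RWE = sum(p * ln(p / q)); eps guards
    against empty sub-bands.
    """
    p = np.asarray(e_test, dtype=float)
    p = p / p.sum()
    q = np.asarray(e_ref, dtype=float)
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

RWE is zero when the two energy distributions coincide and grows as damage shifts energy between sub-bands, which is what makes it usable as a damage indicator.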
34 pages, 22213 KiB  
Article
A Knowledge-Driven Approach for 3D High Temporal-Spatial Measurement of an Arbitrary Contouring Error of CNC Machine Tools Using Monocular Vision
by Xiao Li, Wei Liu, Yi Pan, Jianwei Ma and Fuji Wang
Sensors 2019, 19(3), 744; https://doi.org/10.3390/s19030744 - 12 Feb 2019
Cited by 15 | Viewed by 5520
Abstract
Periodic health checks of contouring errors under unloaded conditions are critical for machine performance evaluation and value-added manufacturing. Aiming to break the dimension, range, and speed measurement limitations of existing devices, a cost-effective, knowledge-driven approach for detecting error motions of arbitrary paths using a single camera is proposed. In combination with the PNP algorithm, the three-dimensional (3D) evaluation of large-scale contouring error under relatively high feed rate conditions can be deduced from a priori geometrical knowledge. The innovations of this paper focus on improving the accuracy, efficiency and ability of the vision measurement. Firstly, a camera calibration method considering distortion partition of the depth-of-field (DOF) is presented to give an accurate description of the distortion behavior in the entire photography domain. Then, to maximize the utilization of the decimal values involved in the feature encoding, new highly efficient encoding markers are designed on a cooperative target to characterize the motion information of the machine. Accordingly, in the image processing, markers are automatically identified and located by the proposed decoding method based on finding the optimal start bit. Finally, with the selected imaging parameters and the precalibrated position of each marker, the 3D measurement of large-scale contouring error under relatively high dynamic conditions can be realized by comparing the curve measured by the PNP algorithm with the nominal one. Both detection and verification experiments are conducted for two types of paths (i.e., planar and spatial trajectories), and the experimental results validate the measurement accuracy and advantages of the proposed method. Full article
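The final step described in the abstract, comparing the curve measured by the PNP algorithm against the nominal one, amounts to a point-to-curve distance computation. The following is a rough, hypothetical sketch (not taken from the paper; the circular path, the sampling density, and the millimeter units are assumptions): the contouring error of each measured 3D point is approximated by its minimum distance to a densely sampled nominal path.

```python
import math

def contouring_error(measured_pts, nominal_curve, samples=2000):
    """Approximate the contouring error of each measured 3D point as its
    minimum distance to a densely sampled nominal parametric curve."""
    # Densely sample the nominal curve for t in [0, 1].
    curve = [nominal_curve(i / (samples - 1)) for i in range(samples)]
    errors = []
    for p in measured_pts:
        d = min(math.dist(p, q) for q in curve)
        errors.append(d)
    return errors

# Hypothetical nominal path: a planar circle of radius 10 mm at z = 0.
def circle(t):
    a = 2 * math.pi * t
    return (10 * math.cos(a), 10 * math.sin(a), 0.0)

# A measured point 0.05 mm outside the circle has ~0.05 mm contouring error.
errs = contouring_error([(10.05, 0.0, 0.0)], circle)
```

For tight tolerances one would refine the search around the nearest sample; this only illustrates the geometric idea.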
Show Figures

Figure 1
<p>Schematic diagram of the measurement system.</p>
Full article ">Figure 2
<p>Imaging geometry for central perspective projection.</p>
Full article ">Figure 3
<p>Control field for distortion correction and camera calibration.</p>
Full article ">Figure 4
<p>Geometric relationship between the partition radiuses at different object planes.</p>
Full article ">Figure 5
<p>Means for motion description. (<b>a</b>) Traditional reflective markers. (<b>b</b>) Small-size artifact. (<b>c</b>) Large-size artifact with coded markers.</p>
Full article ">Figure 6
<p>Large-size cooperative target. (<b>a</b>) Artifact. (<b>b</b>) Cooperative target fixed in the workbench. (<b>c</b>) High SNR image of the acquired coded marker.</p>
Full article ">Figure 7
<p>3D geometric relationships between markers.</p>
Full article ">Figure 8
<p>A work flow for the automatic identification and location of coded markers.</p>
Full article ">Figure 9
<p>Arranging vectors clockwise beginning with the ‘start tag’.</p>
Full article ">Figure 10
<p>Reading binary sequence clockwise. (<b>a</b>) case 1. (<b>b</b>) case 2.</p>
Full article ">Figure 11
<p>Pseudo-code description of calculating the coded value.</p>
Full article ">Figure 12
<p>Schematic diagram of coded value calculation when <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mo>&lt;</mo> <mn>0</mn> </mrow> </semantics></math> or <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 13
<p>Schematic diagram of coded value calculation when <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mo>&gt;</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 14
<p>Accuracy comparison of the three algorithms without distortion partition. (<b>a</b>) Experimental setup. (<b>b</b>) Reprojection error of DLS code. (<b>c</b>) Reprojection error of LHM code. (<b>d</b>) Reprojection error of OPNP code.</p>
Full article ">Figure 15
<p>Principle for 3D high temporal-spatial measurement using PNP algorithm.</p>
Full article ">Figure 16
<p>Experimental system and test configurations. (<b>a</b>) Experimental system. (<b>b</b>) Test configuration for butterfly curve. (<b>c</b>) Test configuration for spatial curve.</p>
Full article ">Figure 17
<p>3D grey map and grey gradient of the central point captured at 25 FPS with 5120 × 5120 pixels. (<b>a</b>) Results of a static image (ground truth). (<b>b</b>) Results at 3 m/min. (<b>c</b>) Results at 5 m/min. (<b>d</b>) Results at 7 m/min.</p>
Full article ">Figure 18
<p>3D grey map and grey gradient of the central point captured at 100 FPS with 3072 × 3072 pixels. (<b>a</b>) Results of a static image (ground truth). (<b>b</b>) Results at 3 m/min. (<b>c</b>) Results at 5 m/min. (<b>d</b>) Results at 7 m/min.</p>
Full article ">Figure 19
<p>3D grey map and grey gradient of the central point captured at 150 FPS with 1024 × 1024 pixels. (<b>a</b>) Results of a static image (ground truth). (<b>b</b>) Results at 3 m/min. (<b>c</b>) Results at 5 m/min. (<b>d</b>) Results at 7 m/min.</p>
Full article ">Figure 20
<p>Commanded paths in MCS. (<b>a</b>) Large-scale butterfly curve in MCS. (<b>b</b>) Commanded spatial curve in MCS.</p>
Full article ">Figure 21
<p>Distortion curves. (<b>a</b>) Experimental setup. (<b>b</b>) Extracted straight lines on the four subregions of the front plane image by equal-radius partition. (<b>c</b>) Distortion curves of the front plane calculated by minimizing straightness errors.</p>
Full article ">Figure 22
<p>Accuracy verification of the camera calibration method. (<b>a</b>) Reprojection error of OPNP algorithm with partition model. (<b>b</b>) Calibrated accuracy of the camera in each axis. (<b>c</b>) 3D accuracy verification results of the vision system.</p>
Full article ">Figure 23
<p>3D large-scale butterfly path expressed in MCS by data transferring (3 m/min).</p>
Full article ">Figure 24
<p>Verification results of vision system in solving contouring error of large-scale butterfly path (3 m/min). (<b>a</b>) 2D path detected by the two devices. (<b>b</b>) Contouring error obtained by the two devices. (<b>c</b>) Verification results of vision system.</p>
Full article ">Figure 25
<p>3D large-scale butterfly path expressed in MCS by data transferring (5 m/min).</p>
Full article ">Figure 26
<p>Verification results of vision system in solving contouring error of large-scale butterfly path (5 m/min). (<b>a</b>) 2D path detected by the two devices. (<b>b</b>) Contouring error obtained by the two devices. (<b>c</b>) Verification results of vision system.</p>
Full article ">Figure 27
<p>Test configuration of the cross-grid encoder.</p>
Full article ">Figure 28
<p>3D large-scale butterfly path expressed in MCS by data transferring (3 m/min).</p>
Full article ">Figure 29
<p>Distortion partitioning method. (<b>a</b>) Equal radius partition. (<b>b</b>) Equal distortion partition.</p>
Full article ">
16 pages, 7412 KiB  
Article
Monitoring Land Subsidence in Wuhan City (China) using the SBAS-InSAR Method with Radarsat-2 Imagery Data
by Yang Zhang, Yaolin Liu, Manqi Jin, Ying Jing, Yi Liu, Yanfang Liu, Wei Sun, Junqing Wei and Yiyun Chen
Sensors 2019, 19(3), 743; https://doi.org/10.3390/s19030743 - 12 Feb 2019
Cited by 80 | Viewed by 8324
Abstract
Wuhan city is the biggest city in central China and has suffered subsidence problems in recent years because of its rapid urban construction. However, longtime and wide range monitoring of land subsidence is lacking. The causes of subsidence also require further study, such [...] Read more.
Wuhan city is the biggest city in central China and has suffered subsidence problems in recent years because of its rapid urban construction. However, longtime and wide-range monitoring of land subsidence is lacking. The causes of subsidence, such as natural conditions and human activities, also require further study. We use the small baseline subset (SBAS) interferometric synthetic aperture radar (InSAR) method and high-resolution RADARSAT-2 images acquired between 2015 and 2018 to derive subsidence. The SBAS-InSAR results are validated by 56 leveling benchmarks where two readings of elevation were recorded. Two natural factors (carbonate rock and soft soils) and three human factors (groundwater exploitation, subway excavation and urban construction) are investigated for their relationships with land subsidence. Results show that four major areas of subsidence are detected and the subsidence rate varies from −51.56 to 27.80 millimeters per year (mm/yr) with an average of −0.03 mm/yr. More than 83.81% of persistent scatterer (PS) points obtain a standard deviation of less than 6 mm/yr, and the difference between the SBAS-InSAR method and leveling data is less than 5 mm/yr. Thus, we conclude that the SBAS-InSAR method with Radarsat-2 data is reliable for longtime monitoring of land subsidence covering a large area in Wuhan city. In addition, land subsidence is caused by a combination of natural conditions and human activities: natural conditions provide the basis that makes subsidence possible, while human activities are the driving factors that trigger it. Moreover, the subsidence information could be used in disaster prevention, urban planning, and hydrological modeling. Full article
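The per-point velocity that SBAS-InSAR reports in mm/yr is, at its core, a rate fitted to a displacement time series. The sketch below is only illustrative: a plain least-squares slope stands in for the full SBAS inversion, and the numbers are invented.

```python
def subsidence_rate(times_yr, disp_mm):
    """Ordinary least-squares slope (mm/yr) of a displacement time series,
    a stand-in for the per-pixel velocity estimation over an SBAS stack."""
    n = len(times_yr)
    tm = sum(times_yr) / n
    dm = sum(disp_mm) / n
    num = sum((t - tm) * (d - dm) for t, d in zip(times_yr, disp_mm))
    den = sum((t - tm) ** 2 for t in times_yr)
    return num / den

# Hypothetical pixel subsiding at -20 mm/yr, observed over ~2.5 years.
times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
disp = [-20.0 * t for t in times]
rate = subsidence_rate(times, disp)
```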
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Techniques and Applications)
Show Figures

Figure 1
<p>The location of Wuhan city in China and the study area. The red rectangle illustrates the coverage of Radarsat-2. B1–B6 represent six carbonate rock belts aligned in an East-West orientation, namely Tianxingzhou, Daqiao, Baishazhou, Zhuankou, Junshan, and Hannan.</p>
Full article ">Figure 2
<p>Flowchart of SBAS-InSAR data processing.</p>
Full article ">Figure 3
<p>(<b>a</b>) Time–position of Radarsat-2 image interferometric pairs and (<b>b</b>) time–baseline of Radarsat-2 image interferometric pairs. The yellow diamond denotes the super master image. Blue lines represent interferometric pairs. Green diamonds denote slave images.</p>
Full article ">Figure 4
<p>The average subsidence velocity in LOS from October 2015 to June 2018 across Wuhan city by using SBAS-InSAR technique. The four black rectangles are the four major areas of subsidence. A-E are five points of subsidence, detailed in <a href="#sensors-19-00743-f006" class="html-fig">Figure 6</a>.</p>
Full article ">Figure 5
<p>Spatio-temporal evolution of accumulated subsidence in Wuhan city derived from Radarsat-2 images. Only 6 of the 20 subsidence maps are shown.</p>
Full article ">Figure 6
<p>Time-series subsidence at the five typical points A–E. The gray rectangle denotes the early summer (May, June, and July).</p>
Full article ">Figure 7
<p>Leveling data versus SBAS-InSAR method plots of land subsidence.</p>
Full article ">Figure 8
<p>(<b>a</b>) Relationship between soft soil thickness and subsidence rate. (<b>b</b>) The subsidence rate of areas located on carbonate rock belts and those of the whole of the two urban areas.</p>
Full article ">Figure 9
<p>Map of the GERs and Metro Networks of Wuhan city.</p>
Full article ">Figure 10
<p>Maps show subsidence rate in Region 1 (<b>a</b>), and a subsidence profile passing through stations A and B (<b>b</b>).</p>
Full article ">Figure 11
<p>Maps show subsidence rate in Region 2 (<b>a</b>), and time-series subsidence at the four points H-K (<b>b</b>).</p>
Full article ">Figure 12
<p>Maps show the satellite images of Region 2 on 21 January 2015 (<b>a</b>) and 9 December 2017 (<b>b</b>).</p>
Full article ">Figure 13
<p>The correlation between subsidence rate and impervious surface fraction.</p>
Full article ">
13 pages, 4350 KiB  
Article
Finer SHM-Coverage of Inter-Plies and Bondings in Smart Composite by Dual Sinusoidal Placed Distributed Optical Fiber Sensors
by Venkadesh Raman, Monssef Drissi-Habti, Preshit Limje and Aghiad Khadour
Sensors 2019, 19(3), 742; https://doi.org/10.3390/s19030742 - 12 Feb 2019
Cited by 20 | Viewed by 4380
Abstract
Designing new-generation offshore wind turbine blades is a great challenge, as blades are getting larger (typically larger than 100 m). Structural Health Monitoring (SHM), which uses embedded Fiber Optic Sensors (FOSs), is incorporated in critically stressed zones such as [...] Read more.
Designing new-generation offshore wind turbine blades is a great challenge, as blades are getting larger (typically larger than 100 m). Structural Health Monitoring (SHM), which uses embedded Fiber Optic Sensors (FOSs), is incorporated in critically stressed zones such as trailing edges and spar webs. When FOSs are embedded within composites, a ‘penny shape’ region of resin concentration is formed around the section of the FOS. The size of the so-formed defects depends on the diameter of the FOS. Consequently, care must be taken to embed reliable sensors in composites that are as small as possible. The way FOSs are placed within the composite plies is the second critical issue. Previous research work done in this field (1) investigated both multiple linear FOS and sinusoidal FOS placements. The authors pointed out that better structural coverage of the critical zones requires some new concepts. Therefore, a further advancement is proposed in the current article with a novel FOS placement (anti-phasic sinusoidal FOS placement), so as to cover more of the critical area and sense multi-directional strains while the wind blade is in use. The efficiency of the new positioning is demonstrated by numerical and experimental studies. Full article
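The claimed benefit of the anti-phasic (dual-sinusoidal) placement is additional structural coverage. The toy model below is not from the paper: the geometry, the grid, and the simple "sensing radius" notion are all invented purely to illustrate the coverage argument by comparing a single sinusoidal fiber with an anti-phasic pair.

```python
import math

def sinus_path(amplitude, period, phase, length, step=0.5):
    """Points along a sinusoidal fiber path y = A*sin(2*pi*x/period + phase)."""
    n = int(length / step) + 1
    return [(i * step, amplitude * math.sin(2 * math.pi * i * step / period + phase))
            for i in range(n)]

def covered_fraction(paths, width, length, radius, grid=1.0):
    """Fraction of grid points in the strip |y| <= width/2 lying within the
    sensing radius of at least one fiber point."""
    pts = [p for path in paths for p in path]
    total = hit = 0
    y = -width / 2
    while y <= width / 2:
        x = 0.0
        while x <= length:
            total += 1
            if any(math.dist((x, y), p) <= radius for p in pts):
                hit += 1
            x += grid
        y += grid
    return hit / total

# Hypothetical geometry (mm): single sinusoid vs. an anti-phasic pair.
single = [sinus_path(10, 40, 0.0, 80)]
dual = single + [sinus_path(10, 40, math.pi, 80)]  # phase opposition
f1 = covered_fraction(single, 24, 80, 6)
f2 = covered_fraction(dual, 24, 80, 6)
```

With these made-up numbers the anti-phasic pair covers a strictly larger fraction of the strip, which is the qualitative point the article makes.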
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>Wind turbine blades - (<b>a</b>) Adhesive bonding zones in blade - Critical zones of failure in blades, (<b>b</b>) Debonding zone observed at the trailing edge [<a href="#B1-sensors-19-00742" class="html-bibr">1</a>].</p>
Full article ">Figure 2
<p>Different resin eye concentrations depending on the diameter and their effect—(<b>a</b>) Resin eye concentration around FOS position in composite, (<b>b</b>) Resin eye concentration around various diameter FOSs.</p>
Full article ">Figure 3
<p>Multiple FOSs embedded in linear alignment showing variables to optimise.</p>
Full article ">Figure 4
<p>Sinusoidal FOS Alignment Showing 3 variables to optimise.</p>
Full article ">Figure 5
<p>Dual-Sinusoidal FOS Alignment.</p>
Full article ">Figure 6
<p>3D and zoomed 2D view of model having dual-sinusoidal optical fiber alignment.</p>
Full article ">Figure 7
<p>Loads and boundary conditions properties used for model having dual-sinusoidal optical fiber placement.</p>
Full article ">Figure 8
<p>Dual-sinusoidal FOSs’ placement embedded on a glass composite specimen.</p>
Full article ">Figure 9
<p>Stress distribution in the numerical model simulation having three FOSs in a linear placement.</p>
Full article ">Figure 10
<p>Stress distribution in the numerical model simulation having two FOSs in a linear placement.</p>
Full article ">Figure 11
<p>Stress distribution in the numerical model simulation with two FOSs in a linear placement with change in spacing.</p>
Full article ">Figure 12
<p>Comparison between three different multi-linear FOSs placements.</p>
Full article ">Figure 13
<p>Strain measurements showing dual-sinusoidal FOSs placement (placement in dual-sinusoidal mode, in phase opposition, provides coverage complementary to that of a single sinusoidal fiber).</p>
Full article ">Figure 14
<p>Comparison between strain parameter sensed with dual sinusoidal alignment.</p>
Full article ">Figure 15
<p>Bending strain measurement carried out with dual-sinusoidal FOSs alignment on glass-fiber composite specimen.</p>
Full article ">
17 pages, 2723 KiB  
Article
Tracking and Estimation of Multiple Cross-Over Targets in Clutter
by Sufyan Ali Memon, Myungun Kim and Hungsun Son
Sensors 2019, 19(3), 741; https://doi.org/10.3390/s19030741 - 12 Feb 2019
Cited by 11 | Viewed by 3806
Abstract
Tracking problems involving an unknown number of targets, varying target trajectory behaviour, and uncertain target motion in the surveillance region are challenging issues. It is also difficult to estimate cross-over targets in a heavy clutter density environment. In addition, tracking algorithms including smoothers which use [...] Read more.
Tracking problems involving an unknown number of targets, varying target trajectory behaviour, and uncertain target motion in the surveillance region are challenging issues. It is also difficult to estimate cross-over targets in a heavy clutter density environment. In addition, tracking algorithms, including smoothers which use measurements from upcoming scans to estimate the targets, are often unsuccessful in tracking due to low detection probabilities. For efficient and better tracking performance, the smoother must rely on backward tracking to fetch measurements from future scans to estimate the forward track at the current time. This novel idea is utilized in the joint integrated track splitting (JITS) filter to develop a new fixed-interval smoothing JITS (FIsJITS) algorithm for tracking multiple cross-over targets. The FIsJITS initializes tracks employing JITS in two directions: forward-time moving JITS (fJITS) and backward-time moving JITS (bJITS). The fJITS acquires the bJITS predictions when they arrive from future scans at the current scan for smoothing. As a result, the smoothing multi-target data association probabilities are obtained for computing the fJITS and smoothing output estimates. This significantly improves the estimation accuracy for multiple cross-over targets in heavy clutter. To verify this, numerical assessments of the FIsJITS are tested and compared with existing algorithms using simulations. Full article
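The core smoothing idea, combining a forward estimate with an independent backward prediction at the current scan, can be illustrated in a drastically simplified setting. The snippet below is a generic information-weighted fusion of two scalar Gaussian estimates, not the paper's multi-target FIsJITS machinery with data association.

```python
def fuse(x_f, P_f, x_b, P_b):
    """Information-weighted fusion of a forward estimate (x_f, P_f) with an
    independent backward prediction (x_b, P_b) for a scalar state.
    Weights are inverse variances, so the fused variance is always smaller."""
    info = 1.0 / P_f + 1.0 / P_b
    P_s = 1.0 / info
    x_s = P_s * (x_f / P_f + x_b / P_b)
    return x_s, P_s

# Hypothetical scalar position estimates from the two filter directions.
x_s, P_s = fuse(10.0, 4.0, 12.0, 4.0)
```

With equal variances the fused state is the midpoint and the variance halves, which is why the smoothed track is more accurate than either direction alone.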
Show Figures

Figure 1
<p>Block Diagram of Feedback-loop tracking for one interval.</p>
Full article ">Figure 2
<p>Formation of clusters.</p>
Full article ">Figure 3
<p>Fusion of a fJITS prediction with bJITS predictions.</p>
Full article ">Figure 4
<p>Component propagations and merging in forward-path track.</p>
Full article ">Figure 5
<p>Overlapped smoothing intervals.</p>
Full article ">Figure 6
<p>Flow-chart of FIsJITS.</p>
Full article ">Figure 7
<p>Surveillance scenarios in Clutter (<b>a</b>) Three cross-over targets (<b>b</b>) Five cross-over targets (<b>c</b>) Three cross-over targets (Zoom-in) (<b>d</b>) Five cross-over targets (Zoom-in).</p>
Full article ">Figure 8
<p>Three cross-over targets (<b>a</b>) Number of CTTs (<b>b</b>) RMSE of target 1.</p>
Full article ">Figure 9
<p>Five cross-over targets (<b>a</b>) Number of CTTs (<b>b</b>) RMSE of target 5.</p>
Full article ">
23 pages, 4320 KiB  
Article
Joint Optimization for Task Offloading in Edge Computing: An Evolutionary Game Approach
by Chongwu Dong and Wushao Wen
Sensors 2019, 19(3), 740; https://doi.org/10.3390/s19030740 - 12 Feb 2019
Cited by 43 | Viewed by 7910
Abstract
The mobile edge computing (MEC) paradigm provides a promising solution to solve the resource-insufficiency problem in mobile terminals by offloading computation-intensive and delay-sensitive tasks to nearby edge nodes. However, limited computation resources in edge nodes may not be sufficient to serve excessive offloading [...] Read more.
The mobile edge computing (MEC) paradigm provides a promising solution to the resource-insufficiency problem in mobile terminals by offloading computation-intensive and delay-sensitive tasks to nearby edge nodes. However, the limited computation resources in edge nodes may not be sufficient to serve excessive offloading tasks exceeding the computation capacities of the edge nodes. Therefore, coordinating multiple edge clouds with a complementary central cloud to serve users is an efficient architecture for satisfying users’ Quality-of-Service (QoS) requirements while trying to minimize the network service provider’s cost. We study a dynamic, decentralized resource-allocation strategy based on evolutionary game theory to deal with task offloading to multiple heterogeneous edge nodes and central clouds among multiple users. In our strategy, the resource competition among users is modeled by the process of replicator dynamics. During this process, our strategy can reach an evolutionary equilibrium, meeting users’ QoS requirements under the resource constraints of edge nodes. The stability and fairness of this strategy are also proved by mathematical analysis. Illustrative studies show the effectiveness of our proposed strategy, which outperforms other alternative methods. Full article
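Replicator dynamics, which model the resource competition here, can be sketched generically: each strategy's population share grows in proportion to how much its payoff exceeds the population average. The payoffs and step size below are invented for illustration and do not reflect the paper's utility model.

```python
def replicator_step(x, payoffs, eta=0.1):
    """One discrete replicator-dynamics update: the share of each strategy
    grows in proportion to its payoff advantage over the mean payoff,
    then shares are renormalized to sum to 1."""
    avg = sum(xi * pi for xi, pi in zip(x, payoffs))
    x_new = [xi + eta * xi * (pi - avg) for xi, pi in zip(x, payoffs)]
    s = sum(x_new)
    return [xi / s for xi in x_new]

# Hypothetical: users choosing among two edge clouds and one central cloud,
# with fixed payoffs (real payoffs would depend on load and delay).
shares = [1 / 3, 1 / 3, 1 / 3]
payoffs = [3.0, 2.0, 1.0]
for _ in range(200):
    shares = replicator_step(shares, payoffs)
```

With fixed payoffs the population converges to the best option; in the paper's setting payoffs depend on the current load, so the process settles at an evolutionary equilibrium instead.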
(This article belongs to the Special Issue Recent Advances in Fog/Edge Computing in Internet of Things)
Show Figures

Figure 1
<p>An overview of a mobile edge computing (MEC) system.</p>
Full article ">Figure 2
<p>Task offloading and resource allocation in the MEC cloud platform.</p>
Full article ">Figure 3
<p>Cumulative operational costs of five strategies.</p>
Full article ">Figure 4
<p>Task-completion time of four strategies.</p>
Full article ">Figure 5
<p>Cumulative user utility in different regions.</p>
Full article ">Figure 6
<p>Dynamic Cloud Selection for Mobile Users in the Same Region.</p>
Full article ">Figure 7
<p>Different types of task completion time in the same region.</p>
Full article ">Figure 8
<p>The normalized load of clouds along with simulation time.</p>
Full article ">Figure 9
<p>Dynamic VM Assignment for Different Types of Task in the Same Region.</p>
Full article ">Figure 10
<p>The convergence speed incurred by information exchange delay.</p>
Full article ">
17 pages, 13689 KiB  
Article
Accurate and Cost-Effective Micro Sun Sensor based on CMOS Black Sun Effect
by Rashid Saleem and Sukhan Lee
Sensors 2019, 19(3), 739; https://doi.org/10.3390/s19030739 - 12 Feb 2019
Cited by 9 | Viewed by 7031
Abstract
An accurate and cost-effective micro sun sensor based on the extraction of the sun vector using a phenomenon called the “black sun” is presented. Unlike conventional image-based sun sensors where there is difficulty in accurately detecting the sun center, the black sun effect [...] Read more.
An accurate and cost-effective micro sun sensor based on the extraction of the sun vector using a phenomenon called the “black sun” is presented. Unlike conventional image-based sun sensors, where it is difficult to accurately detect the sun center, the black sun effect allows the sun center to be accurately extracted even when the sun image appears irregular and noisy due to glare. This allows the proposed micro sun sensor to achieve high accuracy even when a 1 mm × 1 mm CMOS image sensor with a resolution of 250 × 250 pixels is used. The proposed micro sun sensor is implemented in two application modes: (1) a stationary mode targeted at tracking the sun for heliostats or solar panels, with a fixed pose, using a single image sensor of 1 mm × 1 mm × 1.74 mm in size, and (2) a non-stationary mode targeted at determining the orientation of moving platforms, with six sensors on the platform configured in an icosahedron geometry of 23 mm × 23 mm × 12 mm in size. For the stationary mode, we obtained an accuracy of 0.013° by applying a Kalman filter to the sun sensor measurements for a particular sensor orientation. For the non-stationary mode, we obtained an improved accuracy of 0.05° by fusing the measurements from the three sun sensors available at any instant of time. Furthermore, experiments indicate that the black sun effect makes the precision of the sun vector extraction independent of the sun location captured on the image plane. Full article
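In the non-stationary mode, sun vectors from the three cameras that see the sun at any instant are fused. The snippet below shows one plausible fusion step, averaging unit vectors assumed to be already rotated into a common platform frame and renormalizing; the perturbed readings are invented, and the paper's actual fusion scheme may differ.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def fuse_sun_vectors(vectors):
    """Fuse unit sun vectors from several cameras (already expressed in a
    common platform frame) by averaging and renormalizing."""
    mean = tuple(sum(c) / len(vectors) for c in zip(*vectors))
    return normalize(mean)

# Hypothetical readings from three cameras, each slightly perturbed
# around the true direction (0, 0, 1).
v1 = normalize((0.01, 0.00, 1.0))
v2 = normalize((-0.01, 0.01, 1.0))
v3 = normalize((0.00, -0.01, 1.0))
fused = fuse_sun_vectors([v1, v2, v3])
# Angular error of the fused vector relative to the true direction, degrees.
angle_deg = math.degrees(math.acos(min(1.0, fused[2])))
```

Averaging lets independent per-camera perturbations partially cancel, which is the intuition behind the improved multi-sensor accuracy.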
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p><b>Left</b>: Sun captured by a NanEye camera; <b>Right</b>: Oversaturation in a CMOS Image Sensor causes electron overspill that increases the reference voltage, resulting in the output signal being “near zero”.</p>
Full article ">Figure 2
<p>Awaiba NanEye 2D camera module compared with a matchstick for size (Source: CMOSIS, 2015).</p>
Full article ">Figure 3
<p>Visibility of the black sun with different camera parameters.</p>
Full article ">Figure 4
<p>From left to right, a gradual step of decrementing the intensity by using the centroid detection algorithm to refine the corner points in each iteration is seen. Each row represents an iteration, if the loop size is 4 then (<b>a</b>) the pixel intensity is greater than the threshold +<math display="inline"><semantics> <mrow> <mn>4</mn> </mrow> </semantics></math>; (<b>b</b>) the pixel intensity is greater than the threshold + 3; (<b>c</b>) the pixel intensity is greater than the threshold + 2; (<b>d</b>) the pixel intensity is greater than the threshold + 1.</p>
Full article ">Figure 5
<p>The final stage of the centroid detection algorithm after all iterations: (<b>a</b>) all potential black sun candidates; (<b>b</b>) points surviving between iterations; (<b>c</b>) black sun centroid after determining the point with the largest radius.</p>
Full article ">Figure 6
<p>(<b>a</b>) Circle Hough transform (CHT) failing to detect the black sun; (<b>b</b>) the proposed method; (<b>c</b>) binary segmented image of the sun showing an irregular shape due to glare.</p>
Full article ">Figure 7
<p>(<b>a</b>) CHT detecting multiple circles; (<b>b</b>) the proposed method; (<b>c</b>) binary segmented image of the sun capture with the black sun with glare.</p>
Full article ">Figure 8
<p>Conversion of pixel coordinates to sun vector representation.</p>
Full article ">Figure 9
<p>Stationary application process flow.</p>
Full article ">Figure 10
<p>Effects of filtering of a single camera; <b>Top</b>: Elevation; <b>Bottom</b>: Azimuth.</p>
Full article ">Figure 11
<p>Non-Stationary application process flow.</p>
Full article ">Figure 12
<p>Multiple-image-sensor icosahedron configuration for the sun sensor design with a three-image sensor capable of capturing the sun simultaneously at any given time.</p>
Full article ">Figure 13
<p>(<b>a</b>) and (<b>b</b>) CAD layouts of the first and second prototypes; (<b>c</b>) and (<b>d</b>) 3D printed sun sensor module with a coin for size comparison of the first and second prototypes.</p>
Full article ">Figure 14
<p>CAD layout of the aluminum metal design.</p>
Full article ">Figure 15
<p>Aluminum metal design (<b>left</b>) alongside the second prototype (<b>right</b>).</p>
Full article ">Figure 16
<p>(<b>a</b>) Camera 1 variances observed by elevation and azimuth during experimentation subdivided at 1-minute intervals; (<b>b</b>) image frames showing the transition of the position of sun image captured on image plane from beginning to end of experimentation.</p>
Full article ">Figure 17
<p>(<b>a</b>) Camera 2 variances observed by elevation and azimuth during experimentation subdivided at 1-minute intervals; (<b>b</b>) image frames showing the transition of the position of sun image captured on image plane from beginning to the end of experimentation.</p>
Full article ">Figure 18
<p>(<b>a</b>) Camera 3 variances observed by elevation and azimuth during experimentation subdivided at 1 min intervals; (<b>b</b>) image frames showing the transition of the position of sun image captured on image plane from beginning to the end of experimentation.</p>
Full article ">Figure 19
<p>Performance observed in the stationary application by a single camera configuration; <b>Top</b>: Elevation; <b>Bottom</b>: Azimuth.</p>
Full article ">Figure 20
<p>Images captured simultaneously by three cameras in the icosahedron configuration; (<b>a</b>) camera 1; (<b>b</b>) camera 2; (<b>c</b>) camera 3.</p>
Full article ">Figure 21
<p>Images captured simultaneously under dynamic cloud condition by three cameras in the icosahedron configuration; (<b>a</b>) camera 1; (<b>b</b>) camera 2; (<b>c</b>) camera 3.</p>
Full article ">Figure 22
<p>Three-camera sun vector reading with the fused vector in a non-stationary application; <b>Top</b>: Elevation; <b>Bottom</b>: Azimuth.</p>
Full article ">Figure 23
<p>Error distribution seen by the multiple-camera configuration in the non-stationary application.</p>
Full article ">
14 pages, 8465 KiB  
Article
Natural Fibre-Reinforced Polymer Composites (NFRP) Fabricated from Lignocellulosic Fibres for Future Sustainable Architectural Applications, Case Studies: Segmented-Shell Construction, Acoustic Panels, and Furniture
by Hanaa Dahy
Sensors 2019, 19(3), 738; https://doi.org/10.3390/s19030738 - 12 Feb 2019
Cited by 67 | Viewed by 12958
Abstract
Due to the high amounts of waste generated from the building industry field, it has become essential to search for renewable building materials to be applied in wider and more innovative methods in architecture. One of the materials with the highest potential in [...] Read more.
Due to the high amounts of waste generated by the building industry, it has become essential to search for renewable building materials that can be applied in wider and more innovative ways in architecture. One of the materials with the highest potential in this area is natural fibre-reinforced polymers (NFRP), also called biocomposites, which are filled or reinforced with annually renewable lignocellulosic fibres. This would permit various closed-material-cycle scenarios and should decrease the amounts of waste generated in the building industry. Throughout this paper, this discussion is illustrated through a number of developments and 1:1 mockups fabricated from newly developed lignocellulosic-based biocomposites from both bio-based and non-bio-based thermoplastic and thermoset polymers. Recyclability, closed material cycles, and design variations with diverse digital fabrication technologies are discussed for each case. The mock-ups’ concepts, material compositions, and fabrication methods are illustrated. In the first case study, a structural segmented shell construction is developed and constructed. In the second case study, acoustic panels are developed. The final case studies are two types of furniture, each developed from a different lignocellulosic-based biocomposite. All of the presented case studies show diverse architectural design possibilities, structural abilities, and physical building characteristics. Full article
(This article belongs to the Special Issue Advances in FRP Composites: Applications, Sensing, and Monitoring)
Show Figures

Figure 1
<p>Up: (<b>1</b>) Compounding, (<b>2</b>) pelletising, and (<b>3</b>) extrusion processes. Down: (<b>4</b>) Cut panel, (<b>5</b>) the thermoforming-process of a thermoplastic biocomposite panel, and (<b>6</b>) a thermoformed final finished panel (photos: Dahy, H.; republished [<a href="#B8-sensors-19-00738" class="html-bibr">8</a>]).</p>
Full article ">Figure 2
<p>(<b>a</b>) The erected BioMat pavilion at the campus of University of Stuttgart; (<b>b</b>) Connections of modular biocomposite sandwich panels in combination with the structural beams. © BioMat/ ITKE-University of Stuttgart.</p>
Full article ">Figure 3
<p>(<b>a</b>) Vacuum-pressing fabrication technique applied in the production of the sandwich panels using the closed molding technique; (<b>b</b>) Isometric view of single layers and connections of one of the four details of the applied modular system. © BioMat/ITKE-University of Stuttgart.</p>
Full article ">Figure 4
<p>Ignot bio-acoustic sandwich panel: (<b>a</b>) bio-acoustic sandwich panel before acoustic measurement; (<b>b</b>) bio-acoustic sandwich panel during the acoustic absorption test with closed edges; (<b>c</b>) knitting/tailoring process of the lignocellulosic biocomposite layers before being fixed on the biofoam core (photo: Subaciute and Stankaityte, BioMat-ITKE/University of Stuttgart); (<b>d</b>) compilation of the measured sound absorption grades as a function of frequency, together with the evaluated sound absorption grades of the Ignot sandwich elements from variant zero to variant one.</p>
Figure 4 Cont.">
Full article ">Figure 5
<p>(<b>a</b>–<b>d</b>) Illustration of the fabrication methods and the acoustic absorption graph of the <span class="html-italic">Polycal</span> acoustic panel; (<b>e</b>) various tests performed to determine the processing temperature for the thermoforming techniques. Photo credit: Banaditsch, Foroutan &amp; Moradian, BioMat-ITKE/University of Stuttgart.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>Illustration of the production of Bio-flexi at mass-production scale, indicating the control and feeding units at the Naftex GmbH company, Wiesmoor, Germany (photo: Dahy, H.; republished [<a href="#B7-sensors-19-00738" class="html-bibr">7</a>]).</p>
Full article ">Figure 7
<p>The lounge exhibited in the foyer of the faculty of architecture in the University of Stuttgart, Germany. Photo © BioMat at ITKE, University of Stuttgart.</p>
Full article ">Figure 8
<p>The hemp chair during production, with an emphasis on the winding process applied for the lignocellulosic endless fibres that are pre-soaked in the resin bath. Photo credit: Sachin and Kauffmann, BioMat-ITKE/University of Stuttgart.</p>
Full article ">Figure 9
<p>The final hemp chair with a closer illustration of the detailing of the connection. Photo credit: Sachin and Kauffmann, BioMat-ITKE/University of Stuttgart.</p>
Full article ">
20 pages, 5070 KiB  
Article
A Non-Invasive Medical Device for Parkinson’s Patients with Episodes of Freezing of Gait
by Catalina Punin, Boris Barzallo, Roger Clotet, Alexander Bermeo, Marco Bravo, Juan Pablo Bermeo and Carlos Llumiguano
Sensors 2019, 19(3), 737; https://doi.org/10.3390/s19030737 - 12 Feb 2019
Cited by 31 | Viewed by 7724
Abstract
A critical symptom of Parkinson’s disease (PD) is the occurrence of Freezing of Gait (FOG), an episodic disorder that causes frequent falls and consequent injuries in PD patients. Various auditory, visual, tactile, and other stimulation interventions can be used to induce PD patients to escape FOG episodes. In this article, we describe a low-cost wearable system for non-invasive gait monitoring and external delivery of superficial vibratory stimulation to the lower extremities, triggered by FOG episodes. The intended purpose is to reduce the duration of the FOG episode, thus allowing prompt resumption of gait and preventing major injuries. The system, based on an Android mobile application, uses a tri-axial accelerometer device for gait data acquisition. Gathered data are processed by a discrete wavelet transform-based algorithm that detects FOG episodes precisely and in real time. Detection activates external vibratory stimulation of the legs to reduce FOG time. The integration of detection and stimulation in one low-cost device is the chief novel contribution of this work. We present analyses of sensitivity, specificity, and effectiveness of the proposed system to validate its usefulness. Full article
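The wavelet-based detection pipeline summarized above can be sketched generically. The following is a hypothetical illustration, not the authors' exact algorithm: a pure-Python five-level Haar decomposition of an acceleration trace, followed by a "freeze index"-style ratio of band energies (freeze band around 4–8 Hz versus locomotor band around 1–4 Hz), assuming a 64 Hz sampling rate.

```python
# Hypothetical sketch of wavelet-based FOG detection (not the authors'
# exact algorithm): 5-level Haar DWT plus a band-energy ratio.
import math

def haar_step(x):
    """One Haar analysis step: returns (approximation, detail) coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    return a, d

def haar_decompose(x, levels=5):
    """Multilevel Haar DWT; returns the detail bands d1..dL and the approximation."""
    details, approx = [], list(x)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return details, approx

def freeze_index(signal, fs=64.0):
    # With fs = 64 Hz, the detail bands roughly cover d1: 16-32 Hz,
    # d2: 8-16 Hz, d3: 4-8 Hz, d4: 2-4 Hz, d5: 1-2 Hz.
    details, _ = haar_decompose(signal, levels=5)
    energy = [sum(c*c for c in d) for d in details]
    freeze = energy[2]                 # d3: ~4-8 Hz (trembling)
    locomotor = energy[3] + energy[4]  # d4 + d5: ~1-4 Hz (normal gait)
    return freeze / (locomotor + 1e-12)

# Synthetic traces: normal gait at 2 Hz vs. a FOG-like 6 Hz trembling episode.
fs, n = 64.0, 256
walk = [math.sin(2*math.pi*2.0*t/fs) for t in range(n)]
fog  = [math.sin(2*math.pi*6.0*t/fs) for t in range(n)]
print(freeze_index(walk, fs) < 1.0 < freeze_index(fog, fs))
```

A thresholded index of this kind is one common way to turn band energies into a binary FOG decision; the sampling rate and band boundaries here are illustrative assumptions.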
Show Figures

Figure 1

Figure 1
<p>How Parkinson’s disease originates [<a href="#B25-sensors-19-00737" class="html-bibr">25</a>].</p>
Full article ">Figure 2
<p>Decomposition wavelet in five levels: tree topology.</p>
Full article ">Figure 3
<p>Acceleration in patients 5 and 6, measured at the lower extremities.</p>
Full article ">Figure 4
<p>(<b>a</b>) Printed Circuit Board with components; (<b>b</b>) Encapsulated device in ergonomic support; (<b>c</b>) Device placed in patient.</p>
Full article ">Figure 5
<p>Graphical interface of application “FOG Detection” (<b>a</b>) home screen; (<b>b</b>) results screen/stimulus OFF; (<b>c</b>) results screen/stimulus ON.</p>
Full article ">Figure 6
<p>Signals of patients with FOG.</p>
Full article ">Figure 7
<p>Signals of patients without FOG.</p>
Full article ">Figure 8
<p>Distribution of total energy in the subbands of the wavelet coefficients.</p>
Full article ">Figure 9
<p>Percentage distribution of energy in the subbands of the coefficients.</p>
Full article ">
23 pages, 1174 KiB  
Article
NOMA-Assisted Multiple Access Scheme for IoT Deployment: Relay Selection Model and Secrecy Performance Improvement
by Dinh-Thuan Do, Minh-Sang Van Nguyen, Thi-Anh Hoang and Miroslav Voznak
Sensors 2019, 19(3), 736; https://doi.org/10.3390/s19030736 - 12 Feb 2019
Cited by 62 | Viewed by 6640
Abstract
In this paper, an Internet-of-Things (IoT) system containing a relay selection is studied, employing an emerging multiple access scheme, namely non-orthogonal multiple access (NOMA). This paper proposes a new scheme to consider secure performance, called relay selection NOMA (RS-NOMA). In particular, we consider metrics to evaluate secure performance in such an RS-NOMA system, where a base station (the master node in IoT) sends confidential messages to two main sensors (so-called NOMA users) under the influence of an external eavesdropper. In the proposed IoT scheme, the two NOMA sensors and an illegal sensor are served with different levels of allocated power at the base station. Such RS-NOMA operates over the two-hop transmission of the relaying system. We formulate closed-form expressions of the secrecy outage probability (SOP) and the strictly positive secrecy capacity (SPSC) to examine the secrecy performance under controlled setting parameters such as the transmit signal-to-noise ratio (SNR), the number of selected relays, the channel gains, and the threshold rates. The performance differences are illustrated through comparisons between NOMA and orthogonal multiple access (OMA). Finally, the advantage of NOMA over OMA in secure performance is confirmed both analytically and numerically. Full article
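Closed-form SOP expressions of this kind are routinely cross-checked against Monte Carlo simulation. As a generic illustration only (a single-hop Rayleigh-fading link, not the RS-NOMA system itself), the sketch below estimates the probability that the secrecy rate falls below a target threshold:

```python
# Generic Monte Carlo sketch of a secrecy outage probability (SOP) under
# Rayleigh fading; channel power gains |h|^2 are exponentially distributed.
# This is an illustrative model, not the paper's RS-NOMA derivation.
import math, random

def sop_monte_carlo(snr_d, snr_e, r_s, trials=50_000, seed=1):
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        g_d = rng.expovariate(1.0)   # legitimate-link channel gain
        g_e = rng.expovariate(1.0)   # eavesdropper-link channel gain
        c_s = max(0.0, math.log2(1 + snr_d*g_d) - math.log2(1 + snr_e*g_e))
        if c_s < r_s:                # secrecy rate below target -> outage
            outages += 1
    return outages / trials

# SOP should fall as the legitimate link's SNR grows relative to the
# eavesdropper's link.
print(sop_monte_carlo(100.0, 1.0, 1.0))  # strong main link -> low SOP
print(sop_monte_carlo(1.0, 1.0, 1.0))    # symmetric links  -> high SOP
```

The SNR values and target rate here are arbitrary illustrative choices; in the paper these correspond to the controllable parameters listed in the abstract.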
(This article belongs to the Special Issue Internet of Things and Machine-to-Machine Communication)
Show Figures

Figure 1

Figure 1
<p>System model of a RS-NOMA assisted IoT system in the existence of an external eavesdropper.</p>
Full article ">Figure 2
<p>Comparison study on SOP of NOMA and OMA for User D1 versus <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>S</mi> </msub> <mo>=</mo> <msub> <mi>ρ</mi> <mi>R</mi> </msub> </mrow> </semantics></math> as changing <math display="inline"><semantics> <msub> <mi>R</mi> <mn>1</mn> </msub> </semantics></math> (<math display="inline"><semantics> <mrow> <msub> <mrow> <msub> <mi>λ</mi> <mi>D</mi> </msub> </mrow> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>λ</mi> <mi>E</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>E</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> dB, <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 3
<p>Comparison study on SOP of NOMA and OMA for User D2 versus <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>S</mi> </msub> <mo>=</mo> <msub> <mi>ρ</mi> <mi>R</mi> </msub> </mrow> </semantics></math> as changing <span class="html-italic">K</span> (<math display="inline"><semantics> <mrow> <msub> <mrow> <msub> <mi>λ</mi> <mi>D</mi> </msub> </mrow> <mn>1</mn> </msub> <mo>=</mo> <msub> <mrow> <msub> <mi>λ</mi> <mi>D</mi> </msub> </mrow> <mn>12</mn> </msub> <mo>=</mo> <msub> <mi>λ</mi> <mrow> <mi>S</mi> <mi>R</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> <msub> <mrow> <msub> <mi>λ</mi> <mrow> <mi>k</mi> <mi>D</mi> </mrow> </msub> </mrow> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>λ</mi> <mi>E</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>E</mi> </msub> <mo>=</mo> <mo>−</mo> <mn>10</mn> </mrow> </semantics></math> dB, <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 4
<p>Comparison study on SOP of NOMA and OMA for User D1 versus transmit <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>S</mi> </msub> <mo>=</mo> <msub> <mi>ρ</mi> <mi>R</mi> </msub> </mrow> </semantics></math> as varying <math display="inline"><semantics> <msub> <mi>λ</mi> <mrow> <mi>D</mi> <mn>1</mn> </mrow> </msub> </semantics></math>.</p>
Full article ">Figure 5
<p>SOP of NOMA and OMA for User D2 versus <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>S</mi> </msub> <mo>=</mo> <msub> <mi>ρ</mi> <mi>R</mi> </msub> </mrow> </semantics></math> as varying <math display="inline"><semantics> <msub> <mi>R</mi> <mn>2</mn> </msub> </semantics></math>.</p>
Full article ">Figure 6
<p>Comparison study of SOP for NOMA and OMA for User D1 versus <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>S</mi> </msub> <mo>=</mo> <msub> <mi>ρ</mi> <mi>R</mi> </msub> </mrow> </semantics></math> as varying <math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>E</mi> </msub> </semantics></math>.</p>
Full article ">Figure 7
<p>Comparison study of SOP for NOMA and OMA for User D2 versus <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>S</mi> </msub> <mo>=</mo> <msub> <mi>ρ</mi> <mi>R</mi> </msub> </mrow> </semantics></math> as varying <math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>E</mi> </msub> </semantics></math>.</p>
Full article ">Figure 8
<p>Comparison study of SOP in several cases versus <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>S</mi> </msub> <mo>=</mo> <msub> <mi>ρ</mi> <mi>R</mi> </msub> </mrow> </semantics></math> (<math display="inline"><semantics> <mrow> <msub> <mrow> <msub> <mi>λ</mi> <mi>D</mi> </msub> </mrow> <mn>1</mn> </msub> <mo>=</mo> <msub> <mrow> <msub> <mi>λ</mi> <mi>D</mi> </msub> </mrow> <mn>12</mn> </msub> <mo>=</mo> <msub> <mi>λ</mi> <mrow> <mi>S</mi> <mi>R</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> <msub> <mrow> <msub> <mi>λ</mi> <mrow> <mi>k</mi> <mi>D</mi> </mrow> </msub> </mrow> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>λ</mi> <mi>E</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>E</mi> </msub> <mo>=</mo> <mo>−</mo> <mn>8</mn> </mrow> </semantics></math> dB, <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>R</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 9
<p>Optimal SOP in several cases with indication of optimal value regarding <math display="inline"><semantics> <msub> <mi>α</mi> <mn>1</mn> </msub> </semantics></math> (<math display="inline"><semantics> <mrow> <msub> <mrow> <msub> <mi>λ</mi> <mi>D</mi> </msub> </mrow> <mn>1</mn> </msub> <mo>=</mo> <msub> <mrow> <msub> <mi>λ</mi> <mi>D</mi> </msub> </mrow> <mn>12</mn> </msub> <mo>=</mo> <msub> <mi>λ</mi> <mrow> <mi>S</mi> <mi>R</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> <msub> <mrow> <msub> <mi>λ</mi> <mrow> <mi>k</mi> <mi>D</mi> </mrow> </msub> </mrow> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>λ</mi> <mi>E</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>E</mi> </msub> <mo>=</mo> <mo>−</mo> <mn>5</mn> </mrow> </semantics></math> dB, <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>R</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 10
<p>Comparison study of SPSC in several cases versus <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>S</mi> </msub> <mo>=</mo> <msub> <mi>ρ</mi> <mi>R</mi> </msub> </mrow> </semantics></math> at D1 as setting different values of <math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>E</mi> </msub> </semantics></math> (<math display="inline"><semantics> <mrow> <msub> <mrow> <msub> <mi>λ</mi> <mi>D</mi> </msub> </mrow> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>λ</mi> <mi>E</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 11
<p>SPSC performance in several cases versus <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>S</mi> </msub> <mo>=</mo> <msub> <mi>ρ</mi> <mi>R</mi> </msub> </mrow> </semantics></math> as different choices of <math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>E</mi> </msub> </semantics></math> (<math display="inline"><semantics> <mrow> <msub> <mrow> <msub> <mi>λ</mi> <mi>D</mi> </msub> </mrow> <mn>1</mn> </msub> <mo>=</mo> <msub> <mrow> <msub> <mi>λ</mi> <mi>D</mi> </msub> </mrow> <mn>12</mn> </msub> <mo>=</mo> <msub> <mi>λ</mi> <mrow> <mi>S</mi> <mi>R</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> <msub> <mrow> <msub> <mi>λ</mi> <mrow> <mi>k</mi> <mi>D</mi> </mrow> </msub> </mrow> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>λ</mi> <mi>E</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>).</p>
Full article ">
13 pages, 2011 KiB  
Article
3D SSY Estimate of EPFM Constraint Parameter under Biaxial Loading for Sensor Structure Design
by Ping Ding and Xin Wang
Sensors 2019, 19(3), 735; https://doi.org/10.3390/s19030735 - 12 Feb 2019
Cited by 3 | Viewed by 3370
Abstract
Conventional sensor structure design and the related fracture mechanics analysis are based on the single J-integral parameter approach of elastic-plastic fracture mechanics (EPFM). In low-crack-constraint cases, the EPFM one-parameter approach generally overestimates stress, which wastes considerable labor and sensor components. The J-A two-parameter approach overcomes this limitation. To enable wide application of the J-A approach to theoretical research and sensor engineering problems, the authors developed an estimate method, valid under small-scale yielding (SSY) conditions, to obtain the constraint (second) parameter A conveniently and quickly, directly from the T-stress. Practical sensor structure analysis and design focus on three-dimensional (3D) structures under biaxial external loading, whereas the estimate method was developed for the two-dimensional (2D) plane strain condition with uniaxial loading. In the current work, the estimate method was successfully extended to a 3D structure under biaxial loading, which is appropriate for practical sensor design. The extension and validation were implemented on a thin 3D single edge cracked plate (SECP) specimen, in two specified planes along the model thickness. A wide range of material and geometrical properties was applied, with material hardening exponent values of 3, 5, and 10, and crack length ratios of 0.1, 0.3, and 0.7. Full article
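As background for readers, the T-stress from which the estimate starts is the constant second term of the standard Williams expansion of the crack-tip field under SSY (textbook fracture mechanics, not a result of this paper):

```latex
% Williams expansion of the in-plane crack-tip stresses under SSY:
% the singular term is scaled by K_I, and the constant T-stress term
% (acting parallel to the crack plane) sets the crack-tip constraint level.
\sigma_{ij}(r,\theta) = \frac{K_I}{\sqrt{2\pi r}}\, f_{ij}(\theta)
  + T\,\delta_{1i}\,\delta_{1j} + O\!\left(\sqrt{r}\right)
```

In the J-A description, A is the amplitude of the higher-order terms of the elastic-plastic crack-tip field; the paper's contribution is mapping T to A for 3D, biaxially loaded geometries.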
(This article belongs to the Special Issue Sensors for Prognostics and Health Management)
Show Figures

Figure 1

Figure 1
<p>Three-dimensional (3D) modified boundary layer model.</p>
Full article ">Figure 2
<p>Finite element mesh for 3D modified boundary layer (MBL) model (crack front).</p>
Full article ">Figure 3
<p>3D single edge cracked plate (SECP) specimen model.</p>
Full article ">Figure 4
<p>Finite element mesh for 3D SECP specimen (crack front).</p>
Full article ">Figure 5
<p><span class="html-italic">A-T</span> relationship curves from 3D MBL formulation, for plane I (<b>a</b>) and II (<b>b</b>), <span class="html-italic">λ</span> = 1.0.</p>
Full article ">Figure 6
<p>Comparisons of predicted 3D SECP <span class="html-italic">A</span> values with finite element analysis (FEA) data, biaxial ratio <span class="html-italic">λ</span> = 1.0, <span class="html-italic">a/W</span> = 0.1, in plane I.</p>
Full article ">Figure 7
<p>Comparisons of predicted 3D SECP <span class="html-italic">A</span> values with FEA data, biaxial ratio <span class="html-italic">λ</span> = 1.0, <span class="html-italic">a/W</span> = 0.7, in plane I.</p>
Full article ">
20 pages, 4487 KiB  
Article
Multi-UAV Reconnaissance Task Assignment for Heterogeneous Targets Based on Modified Symbiotic Organisms Search Algorithm
by Hao-Xiang Chen, Ying Nan and Yi Yang
Sensors 2019, 19(3), 734; https://doi.org/10.3390/s19030734 - 12 Feb 2019
Cited by 55 | Viewed by 5347
Abstract
This paper considers a reconnaissance task assignment problem for multiple unmanned aerial vehicles (UAVs) with different sensor capacities. A modified Multi-Objective Symbiotic Organisms Search (MOSOS) algorithm is adopted to optimize the UAVs’ task sequences. A time-window-based task model is built for heterogeneous targets. The basic task assignment problem is then formulated as a Multiple Time-Window-based Dubins Travelling Salesmen Problem (MTWDTSP). Double-chain encoding rules and several criteria are established for the task assignment problem under logical and physical constraints. Pareto dominance determination and global adaptive scaling factors are introduced to improve the performance of the original MOSOS. Numerical simulation and Monte Carlo simulation results for the task assignment problem are also presented, and comparisons with the non-dominated sorting genetic algorithm (NSGA-II) and the original MOSOS are made to verify the superiority of the proposed method. The simulation results demonstrate that the modified MOSOS outperforms the original MOSOS and NSGA-II in terms of the optimality and efficiency of the assignment results in MTWDTSP. Full article
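The Pareto dominance determination that the modified MOSOS relies on can be sketched in a few lines (minimization convention; the actual objectives, e.g. mission time and path cost, are the paper's, and this helper is generic):

```python
# Minimal sketch of Pareto dominance and non-dominated filtering for a
# multi-objective optimizer; objective vectors are tuples to minimize.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Keep the objective vectors not dominated by any other member."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

pop = [(3, 5), (4, 4), (5, 3), (4, 6), (6, 6)]
print(pareto_front(pop))  # -> [(3, 5), (4, 4), (5, 3)]
```

In a multi-objective SOS, a filter like this selects the non-dominated "organisms" that form the current front, which is then refined across iterations.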
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>Body/dynamic system and airborne sensor of UAV.</p>
Full article ">Figure 2
<p>Reconnaissance of strip targets.</p>
Full article ">Figure 3
<p>Reconnaissance of surface targets. (<b>a</b>) Smallest circumscribed rectangle of surface targets; (<b>b</b>) Reconnaissance path.</p>
Full article ">Figure 4
<p>Time window of the reconnaissance task.</p>
Full article ">Figure 5
<p>Schematic diagram of the waiting time and arrival time. (<b>a</b>) Arrive before available; (<b>b</b>) Arrive when available.</p>
Full article ">Figure 6
<p>Distribution of Pareto optimal front.</p>
Full article ">Figure 7
<p>Density function between individuals.</p>
Full article ">Figure 8
<p>Distribution map of all task points.</p>
Full article ">Figure 9
<p>Initial “biological” performance index distribution of Scenario 1.</p>
Full article ">Figure 10
<p>The best frontier distribution in history of Scenario 1.</p>
Full article ">Figure 11
<p>Paths of all UAVs in Scenario 1.</p>
Full article ">Figure 12
<p>Initial “biological” performance index distribution in Scenario 2.</p>
Full article ">Figure 13
<p>The best frontier distribution in history in Scenario 2.</p>
Full article ">Figure 14
<p>Paths of employed UAVs.</p>
Full article ">
12 pages, 2760 KiB  
Article
A Piezoelectric Sensor Signal Analysis Method for Identifying Persons Groups
by Hitoshi Ueno
Sensors 2019, 19(3), 733; https://doi.org/10.3390/s19030733 - 12 Feb 2019
Cited by 3 | Viewed by 4848
Abstract
There is an increasing number of elderly single-person households, and the lonely deaths this causes are a social problem. We study a monitoring system for elderly residents that lays piezoelectric sensors inside the house. The system raises few privacy issues because a piezoelectric sensor detects only a person’s vibration signal. Furthermore, it has the benefit of sensing bio-signals, including the respiration cycle and the cardiac cycle. We propose a method of identifying the person on the sensor by analyzing the frequency spectrum of the bio-signal. Multiple harmonic peaks originating from the heartbeat appear in the frequency spectrum. We propose to identify people by using the peak shape as a discrimination criterion. Full article
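The harmonic-peak structure that such identification relies on can be illustrated with a toy signal (a hypothetical sketch, not the paper's processing chain): a periodic cardiac-like pulse train produces spectral peaks at the fundamental heart rate and its integer harmonics. A plain O(N²) DFT keeps the sketch stdlib-only.

```python
# Toy illustration of heartbeat harmonics in a magnitude spectrum.
import cmath, math

def dft_magnitude(x):
    """Magnitude of the first N/2 DFT bins (naive O(N^2) transform)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j*math.pi*k*t/n) for t in range(n)))
            for k in range(n // 2)]

fs, n, f0 = 100.0, 400, 1.25           # 1.25 Hz ~ 75 beats per minute
pulse = [1.0 if (t % round(fs/f0)) == 0 else 0.0 for t in range(n)]
mag = dft_magnitude(pulse)

# Peaks should appear at integer multiples of the fundamental's bin.
bin_f0 = round(f0 * n / fs)            # bin of the fundamental (= 5 here)
peaks = [k for k in range(1, len(mag)) if mag[k] > 0.5 * max(mag[1:])]
print(peaks[:4])  # -> [5, 10, 15, 20]
```

In the paper's method, it is the shape of such harmonic peaks (e.g. their sharpness) rather than just their positions that serves as the discrimination criterion between people.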
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing)
Show Figures

Figure 1

Figure 1
<p>Configuration of elderly home monitoring system.</p>
Full article ">Figure 2
<p>Measurement equipment.</p>
Full article ">Figure 3
<p>Original signal from the sensor.</p>
Full article ">Figure 4
<p>Relationship between number of peaks and sharpness.</p>
Full article ">Figure 5
<p>Relationship between number of peaks and age.</p>
Full article ">Figure A1
<p>The signal waveform (<b>left</b>) and the frequency spectrum (<b>right</b>) for the data of 24 subjects.</p>
Figure A1 Cont.">
Full article ">Figure A2
<p>The signal waveforms (<b>left</b>) measured three times on different days and calculation results of frequency spectrum (<b>right</b>) for four people.</p>
Full article ">
14 pages, 1947 KiB  
Review
Single-Pixel Imaging and Its Application in Three-Dimensional Reconstruction: A Brief Review
by Ming-Jie Sun and Jia-Min Zhang
Sensors 2019, 19(3), 732; https://doi.org/10.3390/s19030732 - 11 Feb 2019
Cited by 171 | Viewed by 14929
Abstract
Whereas modern digital cameras use a pixelated detector array to capture images, single-pixel imaging reconstructs images by sampling a scene with a series of masks and associating the knowledge of these masks with the corresponding intensities measured with a single-pixel detector. Though it does not perform as well as digital cameras in conventional visible imaging, single-pixel imaging has been demonstrated to be advantageous in unconventional applications, such as multi-wavelength imaging, terahertz imaging, X-ray imaging, and three-dimensional imaging. The developments and working principles of single-pixel imaging are reviewed, a mathematical interpretation is given, and the key elements are analyzed. Research on three-dimensional single-pixel imaging and its potential applications is further reviewed and discussed. Full article
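The measurement-and-reconstruction principle summarized above can be sketched in a few lines. This toy example uses orthogonal Walsh–Hadamard masks, one common mask choice in the single-pixel literature, on a 16-pixel scene; with a complete orthogonal mask set, correlating measurements with the known masks recovers the scene exactly.

```python
# Toy single-pixel imaging model: one intensity measurement per mask,
# reconstruction by correlating measurements with the known masks.
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n a power of two."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

scene = list(range(16))   # a 4x4 "image", flattened to 16 pixel values
masks = hadamard(16)      # 16 orthogonal +/-1 masks

# Single-pixel measurements: total light transmitted through each mask.
measurements = [sum(m*s for m, s in zip(mask, scene)) for mask in masks]

# Reconstruction: the masks are orthogonal (H H = 16 I), so the weighted
# sum of masks recovers the scene after 1/N normalization.
recon = [sum(measurements[k] * masks[k][i] for k in range(16)) / 16
         for i in range(16)]
print(recon == scene)
```

Undersampling this measurement set (fewer masks than pixels) is where compressive-sensing reconstructions, discussed in the review, come into play.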
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Schematics of two imaging architectures: (<b>a</b>) In single-pixel imaging, the object is first illuminated by the light source, then imaged by a camera lens onto the focal plane, where a spatial light modulator (SLM) is placed. The SLM modulates the image with different masks, and the reflected or transmitted light intensities are measured by a single-pixel detector. A computational algorithm uses knowledge of the masks, along with their corresponding measurements, to reconstruct an image of the object; (<b>b</b>) in ghost imaging, the object is illuminated by the structured light distribution generated from different masks on an SLM, and the reflected or transmitted light intensities are then measured by a single-pixel detector. A computational algorithm uses knowledge of the masks, along with their corresponding measurements, to reconstruct an image of the object.</p>
Full article ">Figure 2
<p>Single-pixel imaging reconstructions (64 × 64 pixel resolution) using experimental data with different numbers of measurements.</p>
Full article ">Figure 3
<p>Space–time trade-off relationship for performing <span class="html-italic">N</span> measurements: (<b>a</b>) Single-pixel imaging and digital cameras are at the two ends of the curve, while the idea of using <span class="html-italic">T</span> pixels and <span class="html-italic">N</span>/<span class="html-italic">T</span> measurements is a compromise between the two extremes; (<b>b</b>) by using a quadrant detector, the imaging system is 4 times faster in data acquisition [<a href="#B79-sensors-19-00732" class="html-bibr">79</a>].</p>
Full article ">Figure 4
<p>Overview of the image cube method: (<b>a</b>) The illuminating laser pulses back-scattered from a scene are measured as (<b>b</b>) broadened signals; (<b>c</b>) an image cube, containing images at different depths, is obtained using the measured signals; (<b>d</b>) each transverse location has an intensity distribution along the longitudinal axis, indicating depth information; (<b>e</b>) reflectivity and (<b>f</b>) a depth map can be estimated from the image cube, and then be used to reconstruct (<b>g</b>), a 3D image of the scene. Experimental data used in this figure is from the work of [<a href="#B45-sensors-19-00732" class="html-bibr">45</a>].</p>
Full article ">Figure 5
<p>Schematic of a stereo vision based 3D single-pixel imaging.</p>
Full article ">
21 pages, 15553 KiB  
Article
High-Sensitivity Real-Time Tracking System for High-Speed Pipeline Inspection Gauge
by Guanyu Piao, Jingbo Guo, Tiehua Hu and Yiming Deng
Sensors 2019, 19(3), 731; https://doi.org/10.3390/s19030731 - 11 Feb 2019
Cited by 13 | Viewed by 7190
Abstract
Real-time tracking of pipeline inspection gauges (PIGs) is an important aspect of ensuring the safety of oil and gas pipeline inline inspections (ILIs). Transmitting and receiving extremely low frequency (ELF) magnetic signals is one of the preferred tracking methods. Due to increases in the physical parameters of the pipeline, including transportation speed, wall thickness, and burial depth, the received ELF magnetic signals are short transients (1-second duration) and very weak (10 pT), making it difficult for existing above-ground-marker (AGM) systems to operate correctly. Based on the short-transient, very weak characteristics of ELF signals studied with a 2-D finite-element method (FEM) simulation, a data fusion model was derived to fuse the envelope decay rates of ELF signals under a least-squares (LS) criterion. A fast-decision-tree (FDT) method is then proposed to estimate the fused envelope decay rate and output the maximized orthogonal signal power for signal detection through a determined topology and a fast calculation process, which is demonstrated to have excellent real-time detection performance. Simulation and experimental results validate the effectiveness of the proposed FDT method, and we describe the high-sensitivity detection and real-time implementation of a high-speed PIG tracking system, including a transmitter, a receiver, and a pair of orthogonal search-coil sensors. Full article
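The least-squares estimation of an envelope decay rate can be illustrated generically (a hypothetical single-envelope sketch, not the authors' two-coil fusion model): for an exponentially decaying envelope e(t) = A·exp(−βt), ordinary least squares on log e(t) recovers β as the negative slope.

```python
# LS fit of an exponential envelope's decay rate from its log-envelope.
import math

def ls_decay_rate(times, envelope):
    """Fit log e(t) = log A - beta*t by ordinary least squares; return beta."""
    y = [math.log(v) for v in envelope]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times, y))
    den = sum((t - t_mean) ** 2 for t in times)
    return -num / den          # slope is -beta

# Noise-free synthetic envelope with beta = 2.5 (1/s), sampled at 100 Hz.
ts = [k * 0.01 for k in range(100)]
env = [3.0 * math.exp(-2.5 * t) for t in ts]
print(round(ls_decay_rate(ts, env), 6))  # -> 2.5
```

In the paper, decay rates estimated from the two orthogonal coil channels are fused under an LS criterion before the decision stage; the single-channel fit above only shows the underlying regression step.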
(This article belongs to the Special Issue Sensing in Oil and Gas Applications)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Working principle of above-ground-marker (AGM) system.</p>
Full article ">Figure 2
<p>Schematic of 2-D finite-element method (FEM) simulation model.</p>
Full article ">Figure 3
<p>Simulated time-domain extremely low frequency (ELF) signals and envelopes of <span class="html-italic">X</span>-axis and <span class="html-italic">Y</span>-axis, respectively: (<b>a</b>) <span class="html-italic">X</span>-axis signals in 15 m/s; (<b>b</b>) <span class="html-italic">Y</span>-axis signals in 15 m/s; (<b>c</b>) <span class="html-italic">X</span>-axis envelopes in 5 m/s, 10 m/s and 15 m/s. (<b>d</b>) <span class="html-italic">Y</span>-axis envelopes in 5 m/s, 10 m/s and 15 m/s.</p>
Full article ">Figure 4
<p>Amplitudes of <span class="html-italic">X</span>-axis and <span class="html-italic">Y</span>-axis ELF signals change with pipe wall thickness and burial depth, respectively: (<b>a</b>) <span class="html-italic">X</span>-axis; (<b>b</b>) <span class="html-italic">Y</span>-axis.</p>
Full article ">Figure 5
<p>The relationships between <span class="html-italic">k<sub>y</sub></span>, <span class="html-italic">k<sub>xy</sub></span> and normalized <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>P</mi> <mo stretchy="false">^</mo> </mover> <mi>y</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>P</mi> <mo stretchy="false">^</mo> </mover> <mi mathvariant="italic">xy</mi> </msub> </mrow> </semantics></math>: (<b>a</b>) <span class="html-italic">k<sub>y</sub></span>; (<b>b</b>) <span class="html-italic">k<sub>xy</sub></span> and <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>P</mi> <mo stretchy="false">^</mo> </mover> <mi mathvariant="italic">xy</mi> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>Topological structure of the fast-decision-tree (FDT) method.</p>
Full article ">Figure 7
<p>(<b>a</b>–<b>c</b>) Probability of detection (POD) versus probability of false alarm (PFA) for 5 m/s, 10 m/s and 15 m/s when signal-to-noise ratio (SNR) is ‒3 dB, respectively; (<b>d</b>–<b>f</b>) POD versus SNR in 5 m/s, 10 m/s and 15 m/s when PFA is 0.01, respectively.</p>
Figure 7 Cont.">
Full article ">Figure 8
<p>The newly designed and developed tracking system and 20˝ pipeline inspection gauge (PIG): (<b>a</b>) Transmitting coil and circuit; (<b>b</b>) ELF transmitter; (<b>c</b>) 20˝ PIG with transmitter; (<b>d</b>) Circuit of receiver; (<b>e</b>) ELF receiver; (<b>f</b>) A pair of search coil sensors.</p>
Full article ">Figure 9
<p>Structure of search coil sensor.</p>
Full article ">Figure 10
<p>Time-domain ELF signals and normalized test statistics when PFA = 10<sup>−4</sup>: (<b>a</b>–<b>c</b>) High SNR situations with low, medium and high speeds, respectively; (<b>d</b>–<b>f</b>) Low SNR situations with low, medium and high speeds, respectively.</p>
Figure 10 Cont.">
Full article ">Figure 11
<p>Flow chart of the proposed FDT method and tracking system developed.</p>
Full article ">Figure 12
<p>Normalized power spectrum when SNR is 0 dB and speed is 15 m/s.</p>
Full article ">Figure 13
<p>PDF of <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>β</mi> <mo stretchy="false">^</mo> </mover> <mi mathvariant="italic">xy</mi> </msub> </mrow> </semantics></math> and estimated SNR when the speed is 15 m/s: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>β</mi> <mo stretchy="false">^</mo> </mover> <mi mathvariant="italic">xy</mi> </msub> </mrow> </semantics></math>; (<b>b</b>) Estimated SNR.</p>
Full article ">
20 pages, 4069 KiB  
Article
Design of a 1-bit MEMS Gyroscope using the Model Predictive Control Approach
by Xiaofeng Wu, Zhicheng Xie, Xueliang Bai and Trevor Kwan
Sensors 2019, 19(3), 730; https://doi.org/10.3390/s19030730 - 11 Feb 2019
Cited by 6 | Viewed by 4640
Abstract
In this paper, a bi-level Delta-Sigma modulator-based MEMS gyroscope designed with a Model Predictive Control (MPC) approach is presented. MPC is popular because of its capability to handle hard constraints. In this work, we propose to combine the 1-bit nature of the bi-level Delta-Sigma modulator output with the MPC to develop a 1-bit processing-based MPC (OBMPC). This paper focuses on the affine relationship between the 1-bit feedback and the in-loop MPC controller, as this can potentially remove the multipliers from the controller. In doing so, the computational requirement of the MPC control is significantly alleviated, which makes the 1-bit MEMS gyroscope feasible for implementation. In addition, a stable constrained MPC is designed so that the input will not overload the quantizer while maintaining a higher Signal-to-Noise Ratio (SNR). Full article
(This article belongs to the Special Issue MEMS Technology Based Sensors for Human Centered Applications)
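The bi-level Delta-Sigma modulator at the heart of this design emits a ±1 bitstream whose running average tracks the band-limited input. The hypothetical first-order sketch below illustrates that 1-bit property only; it is not the authors' in-loop OBMPC controller.

```python
def delta_sigma_1bit(x):
    """First-order Delta-Sigma modulator: integrate the error between the
    input and the fed-back 1-bit output, then quantise the integrator
    state to +1 or -1."""
    integ, y, out = 0.0, 0.0, []
    for u in x:
        integ += u - y                      # error feedback into integrator
        y = 1.0 if integ >= 0.0 else -1.0   # bi-level quantiser
        out.append(y)
    return out

# A DC input of 0.25 should reappear as the long-run mean of the bitstream
bits = delta_sigma_1bit([0.25] * 2000)
dc_estimate = sum(bits) / len(bits)
```

Because the integrator stays bounded, the average quantisation error decays as O(1/N), which is why the decimated bitstream recovers the input.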
Show Figures

Figure 1
<p>System level diagram of the dynamic system of a mechanical sensor.</p>
Full article ">Figure 2
<p>Structure of a typical Δ∑ modulator-based MEMS gyroscope.</p>
Full article ">Figure 3
<p>A third order 1-bit MEMS gyroscope with resonator.</p>
Full article ">Figure 4
<p>Structure of an <span class="html-italic">n</span>th order 1-bit ∆∑ modulator.</p>
Full article ">Figure 5
<p><span class="html-italic">n</span>th order Δ∑ modulator with parallel state variables (Thick lines denote vector routing).</p>
Full article ">Figure 6
<p>Linearized <span class="html-italic">n</span>th order Δ∑ modulator with parallel state.</p>
Full article ">Figure 7
<p>The OBMPC design for an nth order Δ∑ modulator (Thick lines denote vector routing).</p>
Full article ">Figure 8
<p>MEMS gyroscope using the OBMPC-based Δ∑ modulator (Thick lines denote vector routing).</p>
Full article ">Figure 9
<p>Design of a high order 1-bit MEMS gyroscope using the OBMPC.</p>
Full article ">Figure 10
<p>(<b>a</b>) Simulation structure of the <math display="inline"><semantics> <mo>Δ</mo> </semantics></math>∑ modulator based MEMS gyroscope; (<b>b</b>) Simulation structure of the OBMPC-based MEMS gyroscope.</p>
Full article ">Figure 11
<p>Results for the Δ∑ modulator-based MEMS gyroscopes with Amplitude = 0.6 rad/s: (<b>a</b>) MEMS gyroscopes with sinusoidal input; (<b>b</b>) Comparison of the quantizer input; (<b>c</b>) Spectra comparison.</p>
Full article ">Figure 12
<p>Results for the OBMPC-based MEMS gyroscope with Amplitude = 1.1 rad/s (<b>a</b>) MEMS gyroscope with sinusoidal input; (<b>b</b>) Spectra comparison; (<b>c</b>) Comparison of the quantizer input.</p>
Figure 12 Cont.">
Full article ">Figure 13
<p>(<b>a</b>) SNR and (<b>b</b>) MSE of the quantization noise with different sampling frequencies.</p>
Full article ">Figure 14
<p>(<b>a</b>) MSE and (<b>b</b>) SNR of the quantization noise with different input amplitudes.</p>
Full article ">Figure 15
<p>Comparisons between the OBMPC gyroscope and the conventional Δ∑ modulator-based gyroscope.</p>
Full article ">
18 pages, 7621 KiB  
Article
A High-Computational Efficiency Human Detection and Flow Estimation Method Based on TOF Measurements
by Weihang Wang, Peilin Liu, Rendong Ying, Jun Wang, Jiuchao Qian, Jialu Jia and Jiefeng Gao
Sensors 2019, 19(3), 729; https://doi.org/10.3390/s19030729 - 11 Feb 2019
Cited by 13 | Viewed by 4643
Abstract
State-of-the-art human detection methods rely on deep network architectures to achieve higher recognition performance, at the expense of huge computation. However, computational efficiency and real-time performance are also important evaluation indicators. This paper presents a fast real-time human detection and flow estimation method using depth images captured by a top-view TOF camera. The proposed algorithm mainly consists of head detection based on local pooling and searching, classification refinement based on human morphological features, and a tracking assignment filter based on dynamic multi-dimensional features. A depth image dataset recording more than 10k entry and departure events with detailed human location annotations is established. Taking full advantage of the distance information implied in the depth image, we achieve high-accuracy human detection and people counting with an accuracy of 97.73% and significantly reduce the running time. Experiments demonstrate that our algorithm can run at 23.10 ms per frame on a CPU platform. In addition, the proposed robust approach is effective in complex situations such as fast walking, occlusion, crowded scenes, etc. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
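The head detection stage is described as local pooling and searching on a top-view depth image. A minimal sketch of block-wise min-pooling (heads are the surfaces closest to a ceiling-mounted camera, i.e. the smallest depths) might look as follows; the 10 × 10 block size follows the Figure 2 caption, while the image content is synthetic.

```python
import numpy as np

def block_min_pool(depth, block=10):
    """Block-wise minimum pooling of a top-view depth image: each pooled
    pixel keeps the smallest depth (closest surface) in its block."""
    h, w = depth.shape
    d = depth[:h - h % block, :w - w % block]       # crop to whole blocks
    d = d.reshape(h // block, block, w // block, block)
    return d.min(axis=(1, 3))

# Synthetic 40x40 depth map (mm): floor at 3000 mm, one 'head' at 1500 mm
depth = np.full((40, 40), 3000.0)
depth[12:18, 22:28] = 1500.0
pooled = block_min_pool(depth, block=10)
head_block = np.unravel_index(np.argmin(pooled), pooled.shape)
```

A local search over the pooled map (e.g. 3 × 3 neighbourhood minima under a height threshold) would then yield the head candidates refined by the classifier.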
Show Figures

Figure 1
<p>Distortion rectification and Gaussian filtering. (<b>a</b>) Raw depth image with distortion and overexposure; (<b>b</b>) corrected and rectified depth image. The objects in the scene are adjusted to the correct size. The overexposed pixel values have been corrected; (<b>c</b>) depth image with pixel values converted to the ground plane; (<b>d</b>) Visualization of pooled image. The red area symbolizes proximity to the TOF camera. Blue pixels mean far distance from the TOF camera.</p>
Full article ">Figure 2
<p>Local pooling. For visualization, we resize the pooled image to 320 × 240. Each pixel in the pooled image is a 10 × 10 block.</p>
Full article ">Figure 3
<p>Polynomial equation fitting with different degrees.</p>
Full article ">Figure 4
<p>Local pooling and searching. (<b>a</b>) The input depth images from simple to complex scenes; (<b>b</b>) visualization of pooled images; (<b>c</b>) visualization of local searching and connect component labelling; (<b>d</b>) results of the human detection module.</p>
Full article ">Figure 5
<p>Architecture of the classification refinement network.</p>
Full article ">Figure 6
<p>The procedure of computing elements in the covariance matrix <b><span class="html-italic">C</span></b> and mask <b><span class="html-italic">M</span></b>.</p>
Full article ">Figure 7
<p>Schematic representation of the counting strategy.</p>
Full article ">Figure 8
<p>The experimental environment and installation of TOF camera in the laboratory scene.</p>
Full article ">Figure 9
<p>The results of different human detection methods for the same input depth image in our dataset. The input is a typical multiple people scene with occlusion, where five persons in total move in different directions. (<b>a</b>) Visualization of processing and results combined with distance transformation and watershed algorithm; (<b>b</b>) head central location points clustering based on local maxima search; (<b>c</b>) the proposed human detection method based on local pooling and searching; (<b>d</b>) the proposed approach combined with classification refinement removing false positive candidates; (<b>e</b>) head segmentation results based on convolutional network U-Net3.</p>
Full article ">Figure 10
<p>Performance of our human detection and flow estimation approach on the TVHeads dataset with 96.96% AP and 40 fps.</p>
Full article ">Figure 11
<p>Performance of our human detection and flow estimation approach on our dataset with 97.73% AP and 40 fps.</p>
Full article ">
9 pages, 3137 KiB  
Article
Hybrid Printed Energy Harvesting Technology for Self-Sustainable Autonomous Sensor Application
by Sangkil Kim, Manos M. Tentzeris and Apostolos Georgiadis
Sensors 2019, 19(3), 728; https://doi.org/10.3390/s19030728 - 11 Feb 2019
Cited by 21 | Viewed by 5579
Abstract
In this paper, a far-field energy harvesting system for self-sustainable wireless autonomous sensor applications is presented. The proposed autonomous sensor system consists of a wireless power supplier (active antenna) and far-field energy harvesting technology-enabled autonomous battery-less sensors. The wireless power supplier converts solar power to electromagnetic power in order to transfer power to multiple autonomous sensors wirelessly. The autonomous sensors have far-field energy harvesters which convert the transmitted RF power to voltage-regulated DC power to power the sensor system. Hybrid printing technology was chosen to build the autonomous sensors and the wireless power suppliers. Two popular hybrid electronics technologies (direct nano-particle printing and indirect copper thin film printing techniques) are discussed in detail. Full article
(This article belongs to the Special Issue Passive Electromagnetic Sensors for Autonomous Wireless Networks)
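Sizing the RF link of such a far-field harvester usually starts from a free-space (Friis) budget. The sketch below computes the received power in dBm; the 915 MHz frequency, 20 dBm transmit power and 1 m range are assumed example values, not figures taken from the paper.

```python
import math

def friis_received_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, dist_m):
    """Free-space link budget: Pr = Pt + Gt + Gr - FSPL, with
    FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    fspl_db = (20.0 * math.log10(dist_m)
               + 20.0 * math.log10(freq_hz)
               + 20.0 * math.log10(4.0 * math.pi / 299_792_458.0))
    return pt_dbm + gt_dbi + gr_dbi - fspl_db

# Illustrative 915 MHz link at 1 m with a 20 dBm source and 0 dBi antennas
pr = friis_received_dbm(20.0, 0.0, 0.0, 915e6, 1.0)
```

The resulting power level (around −12 dBm here) is what the RF-DC converter and boost stage must rectify and regulate.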
Show Figures

Figure 1
<p>Application scenario of self-sustainable autonomous sensor system.</p>
Full article ">Figure 2
<p>A typical topology of flexible hybrid printed electronics.</p>
Full article ">Figure 3
<p>Fabrication process of printed flexible conductive film: (<b>a</b>) direct inkjet printing of silver nano-particles and (<b>b</b>) indirect printing of thin copper (Cu) film (electroless electroplating).</p>
Full article ">Figure 4
<p>A solar-powered active antenna (solar-to-RF power converter): (<b>a</b>) System schematic and (<b>b</b>) fabricated RF active antenna.</p>
Full article ">Figure 5
<p>Measured radiation patterns: (<b>a</b>) E-plane (xz-plane) and (<b>b</b>) H-plane (yz-plane).</p>
Full article ">Figure 6
<p>Fabricated RF energy harvesting system prototype utilizing the hybrid printed electronic technology: (<b>a</b>) square loop antenna and (<b>b</b>) RF-DC converter with a boost DC-DC converter.</p>
Full article ">Figure 7
<p>Measurement results. (<b>a</b>) The loop antenna and (<b>b</b>) RF-DC converter.</p>
Full article ">Figure 8
<p>Cross-section of printed metal film: (<b>a</b>) Direct silver nano-particle printing method and (<b>b</b>) indirect thin copper film printing method.</p>
Full article ">Figure 9
<p>Surface SEM (Scanning Electron Microscope) images of (<b>a</b>) direct silver nano-particle printing method and (<b>b</b>) indirect thin copper film printing method.</p>
Full article ">Figure 10
<p>Conductivity values of the printed metal films.</p>
Full article ">
16 pages, 685 KiB  
Article
Improving IoT Botnet Investigation Using an Adaptive Network Layer
by João Marcelo Ceron, Klaus Steding-Jessen, Cristine Hoepers, Lisandro Zambenedetti Granville and Cíntia Borges Margi
Sensors 2019, 19(3), 727; https://doi.org/10.3390/s19030727 - 11 Feb 2019
Cited by 56 | Viewed by 8249
Abstract
IoT botnets have been used to launch Distributed Denial-of-Service (DDoS) attacks affecting the Internet infrastructure. To protect the Internet from such threats and improve security mechanisms, it is critical to understand the botnets’ intents and characterize their behavior. Current malware analysis solutions, when faced with IoT, present limitations with regard to network access containment and network traffic manipulation. In this paper, we present an approach for handling the network traffic generated by IoT malware in an analysis environment. The proposed solution can modify the traffic at the network layer based on the actions performed by the malware. In our case study, we investigated the Mirai and Bashlite botnet families, where it was possible to block attacks on other systems, identify attack targets, and rewrite the commands sent by the botnet controller to the infected devices. Full article
(This article belongs to the Special Issue Threat Identification and Defence for Internet-of-Things)
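Command rewriting at the network layer can be illustrated with a toy sinkhole rule. The plaintext "<METHOD> <ip> <port> <secs>" grammar below is an assumed simplification of Bashlite-style C&C orders, not the paper's actual parser, and the sinkhole address is a documentation-range placeholder.

```python
import re

# Assumed, simplified Bashlite-style plaintext order: "<METHOD> <ip> <port> <secs>"
ATTACK_RE = re.compile(r"(UDP|TCP|HTTP)\s+(\d{1,3}(?:\.\d{1,3}){3})\s+\d+", re.I)

def rewrite_cnc_command(payload, sinkhole="192.0.2.1"):
    """Rewrite the attack-target IP in an outbound C&C order to a
    sinkhole address; all non-matching traffic passes unmodified."""
    m = ATTACK_RE.search(payload)
    if not m:
        return payload
    return payload.replace(m.group(2), sinkhole, 1)
```

In a containment setup, such a rule sits on the gateway between the infected sandboxed device and the Internet, so attack traffic never reaches the real victim while the bot still believes its order was executed.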
Show Figures

Figure 1
<p>Adaptive network layer overview.</p>
Full article ">Figure 2
<p>Mirai Scan Signature: The TCP scan packets are instantiated using the same value for the destination IP address and TCP sequence number fields.</p>
Full article ">Figure 3
<p>C&amp;C Protocol: Message exchange process implemented by Bashlite and Mirai malware families.</p>
Full article ">Figure 4
<p>Malware analysis environment setup.</p>
Full article ">Figure 5
<p>Bashlite’s execution flow: the bot initially establishes a communication with the C&amp;C and then performs the propagation attacks scans.</p>
Full article ">Figure 6
<p>Mirai’s execution flow: The malware initiates the propagation scan process and simultaneously contacts its C&amp;C.</p>
Full article ">Figure 7
<p>Bashlite egress traffic: the number of packets per hour generated by the analyzed malware.</p>
Full article ">Figure 8
<p>Mirai egress traffic: the number of packets per hour generated by the analyzed malware.</p>
Full article ">
14 pages, 2699 KiB  
Article
Enhanced Hydrogen Detection in ppb-Level by Electrospun SnO2-Loaded ZnO Nanofibers
by Jae-Hyoung Lee, Jin-Young Kim, Jae-Hun Kim and Sang Sub Kim
Sensors 2019, 19(3), 726; https://doi.org/10.3390/s19030726 - 11 Feb 2019
Cited by 47 | Viewed by 5774
Abstract
High-performance hydrogen sensors are important in many industries to effectively address safety concerns related to the production, delivery, storage and use of H2 gas. Herein, we present a highly sensitive hydrogen gas sensor based on SnO2-loaded ZnO nanofibers (NFs). The xSnO2-loaded (x = 0.05, 0.1 and 0.15) ZnO NFs were fabricated using an electrospinning technique followed by calcination at high temperature. Microscopic analyses demonstrated the formation of NFs with the expected morphology and chemical composition. Hydrogen sensing studies were performed at various temperatures and the optimal working temperature was determined to be 300 °C. The optimal gas sensor (0.1 SnO2-loaded ZnO NFs) not only showed a high response to 50 ppb hydrogen gas, but also excellent selectivity to hydrogen gas. The excellent performance of the gas sensor was mainly related to the formation of SnO2-ZnO heterojunctions and the metallization effect of ZnO. Full article
(This article belongs to the Section Chemical Sensors)
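The abstract reports sensor "response" without spelling out its definition here. A common convention for n-type metal-oxide sensors, assumed below rather than confirmed by the paper, is Ra/Rg for reducing gases (resistance drops in gas) and Rg/Ra for oxidising gases (resistance rises):

```python
def sensor_response(r_air, r_gas, oxidising=False):
    """Conventional response of an n-type metal-oxide gas sensor:
    Ra/Rg for reducing gases such as H2 and CO,
    Rg/Ra for oxidising gases such as NO2."""
    return r_gas / r_air if oxidising else r_air / r_gas

# Resistance falling from 100 kOhm in air to 20 kOhm in H2: response of 5
resp_h2 = sensor_response(100e3, 20e3)
```

Either form keeps the response ≥ 1, so curves for reducing and oxidising analytes (cf. the H2/NO2/CO selectivity figure) can be compared on the same axis.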
Show Figures

Figure 1
<p>Illustrations of (<b>a</b>) the electrospinning method for the preparation of xSnO<sub>2</sub>-loaded (x = 0.05, 0.1 and 0.15) ZnO nanofibers (NFs) and (<b>b</b>) drop casting of NFs onto the patterned electrode of the sensing substrate.</p>
Full article ">Figure 2
<p>Field-emission scanning electron microscopy (FE-SEM) micrographs of (<b>a</b>) as-spun 0.1 SnO<sub>2</sub>-loaded ZnO NFs and calcined, (<b>b</b>) 0.05 SnO<sub>2</sub>-loaded ZnO NFs, (<b>c</b>) 0.1 SnO<sub>2</sub>-loaded ZnO NFs, and (<b>d</b>) 0.15 SnO<sub>2</sub>-loaded ZnO NFs. Insets in (<b>b</b>–<b>d</b>) show corresponding higher magnification FE-SEM images. (<b>e</b>) Low-magnification TEM image and (<b>f</b>) high-resolution TEM image of 0.1 SnO<sub>2</sub>-loaded ZnO NFs. (<b>g-1</b>), (<b>g-2</b>) and (<b>g-3</b>) energy-dispersive X-ray spectroscopy (EDS) O, Zn and Sn elemental mapping of 0.1 SnO<sub>2</sub>-loaded ZnO NFs taken from (<b>e</b>), respectively.</p>
Full article ">Figure 3
<p>XRD pattern of the 0.1 SnO<sub>2</sub>-loaded ZnO NFs.</p>
Full article ">Figure 4
<p>(<b>a</b>) Dynamic response curves of 0.1 SnO<sub>2</sub>-loaded ZnO NFs to 50 ppb, 100 ppb, 1 ppm and 5 ppm H<sub>2</sub> gas at different temperatures. (<b>b</b>) Calculated hydrogen response as a function of operating temperature.</p>
Full article ">Figure 5
<p>(<b>a</b>) Dynamic response curves of SnO<sub>2</sub>-loaded ZnO NFs gas sensors to 50 ppb, 100 ppb, 1 ppm and 5 ppm H<sub>2</sub> gas at 300 °C. (<b>b</b>) Calculated hydrogen response as a function of hydrogen concentration. (<b>c</b>) Calculated hydrogen response as a function of SnO<sub>2</sub> loading amount.</p>
Full article ">Figure 6
<p>Response and recovery times of xSnO<sub>2</sub> (x = 0, 0.05, 0.1 and 0.15) loaded ZnO NFs to 5 ppm H<sub>2</sub> gas at 300 °C.</p>
Full article ">Figure 7
<p>(<b>a</b>) Dynamic response curves of the 0.1 SnO<sub>2</sub>-loaded ZnO NFs gas sensor to H<sub>2</sub>, NO<sub>2</sub> and CO gas at 300 °C. (<b>b</b>) Response histogram of the 0.1 SnO<sub>2</sub>-loaded ZnO NFs gas sensor at 300 °C.</p>
Full article ">Figure 8
<p>(<b>a</b>) Dynamic resistance curves of the 0.1 SnO<sub>2</sub>-loaded ZnO NF gas sensor to 5 ppm H<sub>2</sub> gas in the presence of 0–79.4% relative humidity (RH). (<b>b</b>) Response to 5 ppm H<sub>2</sub> versus RH%.</p>
Full article ">Figure 9
<p>(<b>a</b>) Long-term stability curves of fresh and 6 months aged 0.1 SnO<sub>2</sub>-loaded ZnO NF gas sensor. (<b>b</b>) Response versus H<sub>2</sub> gas concentration of fresh and 6 months aged 0.1 SnO<sub>2</sub>-loaded ZnO NF gas sensor.</p>
Full article ">Figure 10
<p>(<b>a</b>) Dynamic resistance curve of the 0.1 SnO<sub>2</sub>-loaded ZnO NF gas sensor towards 20 ppb to 100 ppm of H<sub>2</sub> gas. (<b>b</b>) Response versus H<sub>2</sub> gas concentration.</p>
Full article ">Figure 11
<p>Schematics of sensing mechanism in the SnO<sub>2</sub>-loaded ZnO NFs gas sensor. (<b>a</b>) Energy-level diagram of ZnO-SnO<sub>2</sub> in vacuum. The change of potential barriers in (<b>b</b>) air, (<b>c</b>) NO<sub>2</sub>, (<b>d</b>) CO, and (<b>e</b>) H<sub>2</sub>.</p>
Full article ">
14 pages, 3896 KiB  
Article
Establishment and Verification of the Cutting Grinding Force Model for the Disc Wheel Based on Piezoelectric Sensors
by Jing Ni, Kai Feng, M.S.H. Al-Furjan, Xiaojiao Xu and Jing Xu
Sensors 2019, 19(3), 725; https://doi.org/10.3390/s19030725 - 11 Feb 2019
Cited by 10 | Viewed by 4025
Abstract
In this paper, a new model of the cutting grinding force for disc wheels is presented. Initially, it was proposed that the grinding cutting force is formed by the combination of the grinding force and the cutting force. Considering the single-grit morphology, the single-grit average grinding depth, the effective number of grits, and the contact arc length between the grit and the workpiece, the grinding force model and the cutting force model were established, respectively. Then, a universal grinding cutting force model was optimized by introducing an effective grit coefficient model based on a probability-statistics method and a grit height coefficient model based on the Rayleigh distribution. Finally, according to the different proportions of the grinding force and cutting force, the multi-grit grinding cutting force model was established. Simulation and experimental results based on piezoelectric sensors showed that the proposed model can predict the intermittent grinding cutting force well. Moreover, the inclusion of the grit height coefficient and the effective grit number coefficient improved the modeling accuracy: the error between simulation and experiment in the grinding cutting force was reduced to 7.8% in comparison with the traditional model. In addition, the modeling shows that the grinding cutting force can be divided into three segments: increasing, steady, and decreasing. Full article
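The grit height coefficient in this abstract rests on Rayleigh-distributed protrusion heights. Under that model, the expected number of grits actually engaging the workpiece follows directly from the Rayleigh survival function; the sketch below is a generic illustration with assumed numbers, not the paper's calibrated coefficient model.

```python
import math

def rayleigh_survival(gap, sigma):
    """P(h > gap) for a Rayleigh-distributed grit protrusion height h
    with scale parameter sigma: exp(-gap^2 / (2 * sigma^2))."""
    return math.exp(-gap * gap / (2.0 * sigma * sigma))

def expected_effective_grits(n_total, gap, sigma):
    """Expected number of grits whose protrusion exceeds the
    grit-workpiece gap, i.e. the grits that actually cut."""
    return n_total * rayleigh_survival(gap, sigma)

# With the gap equal to the Rayleigh scale, exp(-1/2) ~ 60.7% of grits engage
n_eff = expected_effective_grits(1000, 25e-6, 25e-6)
```

Deeper wheel engagement shrinks the gap, so the effective-grit count, and with it the predicted force, rises smoothly rather than in proportion to the nominal grit count.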
Show Figures

Figure 1
<p>The grinding principle of the disc wheel.</p>
Full article ">Figure 2
<p>The principle diagram of the simplification of single-grit: (<b>a</b>) The surface of the disc wheel; (<b>b</b>) Single-grit morphology; (<b>c</b>) The single grain model.</p>
Full article ">Figure 3
<p>The protrusion height of the grits: (<b>a</b>) The surface of the disc wheel; (<b>b</b>) An equal height grit [<a href="#B24-sensors-19-00725" class="html-bibr">24</a>]; (<b>c</b>) An unequal height grit.</p>
Full article ">Figure 4
<p>A diagram of the grain protrusion height.</p>
Full article ">Figure 5
<p>The motion track and contact arc length of the grits in grinding.</p>
Full article ">Figure 6
<p>A microscopic amplification diagram of the grinding wheel. (<b>a</b>) Area a; (<b>b</b>) Area b; (<b>c</b>) Area c; (<b>d</b>) Area d; (<b>e</b>) Area e; (<b>f</b>) Area f.</p>
Full article ">Figure 7
<p>The bulge height distribution diagram of the surface grit particle in the grinding wheel: (<b>a</b>) The bulge height distribution diagram; (<b>b</b>) The position of the penetration depths.</p>
Full article ">Figure 8
<p>The relationship between the penetration depths and the effective grits per unit length [<a href="#B25-sensors-19-00725" class="html-bibr">25</a>].</p>
Full article ">Figure 9
<p>The experimental set-up used for the grinding tests.</p>
Full article ">Figure 10
<p>The simulation and experimental horizontal grinding cutting force (<span class="html-italic">F</span><sub>x</sub>): (<b>a</b>) Initial stage; (<b>b</b>) Intermediary stage; (<b>c</b>) Final stage. <span class="html-italic">ε =</span> the grit height coefficient and <span class="html-italic">η</span> = the effective grit coefficient.</p>
Full article ">
10 pages, 8637 KiB  
Article
Resonant Photoacoustic Spectroscopy of NO2 with a UV-LED Based Sensor
by Johannes Kapp, Christian Weber, Katrin Schmitt, Hans-Fridtjof Pernau and Jürgen Wöllenstein
Sensors 2019, 19(3), 724; https://doi.org/10.3390/s19030724 - 11 Feb 2019
Cited by 22 | Viewed by 7695
Abstract
Nitrogen dioxide (NO2) is a poisonous trace gas that requires monitoring in urban areas. Accurate measurement of sub-ppm concentrations represents a wide application field for suitable economical sensors. We present a novel approach to measure NO2 with a photoacoustic sensor using a T-shaped resonance cell. An inexpensive UV-LED with a peak wavelength of 405 nm serves as the radiation source, and a commercial MEMS microphone is used for acoustic detection. In this work, a cell has been developed that enables a “non-contact” feedthrough of the divergent LED beam. Thus, unwanted background noise due to absorption on the inside walls is minimized. As part of the development, an acoustic simulation was carried out to find the resonance frequencies and to visualize the resulting standing wave patterns in various geometries. The pressure amplitude was calculated for different shapes and sizes. A model iteratively optimized in this way forms the basis of a construction that was built for gas measurement by rapid prototyping methods. The real resonance frequencies were compared to the ones found in the simulation. The limit of detection for a cell made of aluminum was determined in a nitrogen dioxide measurement to be 200 ppb (6 σ). Full article
(This article belongs to the Collection Gas Sensors)
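The long-term stability of such a sensor is characterised with the Allan deviation (see the 72 h Allan-deviation figure in this article). The non-overlapping estimator and the white-noise sanity check below are a generic sketch, not the authors' analysis code.

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of a sampled signal y at averaging
    factor m: sqrt(0.5 * mean((ybar_{i+1} - ybar_i)^2)) over block means."""
    n = len(y) // m
    means = np.asarray(y[:n * m], dtype=float).reshape(n, m).mean(axis=1)
    return float(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))

# For white noise of standard deviation sigma, ADEV(m) ~ sigma / sqrt(m)
rng = np.random.default_rng(0)
noise = rng.standard_normal(200_000)
adev_10 = allan_deviation(noise, 10)
```

Dividing the deviation in counts by the stated sensitivity (125 counts per ppm) converts it to a concentration, which is how the right-hand axis of that figure is obtained.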
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The emission spectrum of the used UV-LED with a peak wavelength of 405 nm. The datasheet states a spectral bandwidth of 15 nm [<a href="#B8-sensors-19-00724" class="html-bibr">8</a>]. The LED spectrum fits well to the absorption maximum of NO<math display="inline"> <semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics> </math> [<a href="#B9-sensors-19-00724" class="html-bibr">9</a>].</p>
Full article ">Figure 2
<p>(<b>a</b>) Schematic drawing: Diverging LED light within a non-optimized resonant cell similar to that proposed in Ref. [<a href="#B5-sensors-19-00724" class="html-bibr">5</a>]. The light hits the inner walls at the red dashed line, causing the background signal. The lower signal to offset ratio results in a worse limit of detection and stability problems. (<b>b</b>) Concept for use of LED light. The T-cell consists of an absorption chamber that enables the non-contact feedthrough of the beam. Perpendicular to it there is the resonance cylinder, where the standing wave pattern is formed.</p>
Full article ">Figure 3
<p>(<b>a</b>) Cross section of the cell with simulated pressure amplitude corresponding to standing wave patterns of different harmonics. One can observe the first harmonic at 1850 Hz, the third harmonic at 4170 Hz and the fifth at 6710 Hz. Pressure peaks (red) are at the spot, where the microphone will be assembled. As expected, no standing wave pattern is formed within the absorption chamber. (<b>b</b>) Picture of the aluminum cell with mounted microphone and windows (100 mm × 40 mm × 40 mm).</p>
Full article ">Figure 4
<p>(<b>a</b>) Sensor setup in principle. (<b>b</b>) Picture of the setup with optical components mounted in a Thorlabs cage system (100 mm × 40 mm × 750 mm).</p>
Full article ">Figure 5
<p>Frequency response of the resonant photoacoustic cell with peaks of different harmonics: 3rd at 3727 Hz, 5th at 6275 Hz, 7th at 8635 Hz. The 3rd harmonic gives the best signal and has a Q-factor of around 12.4.</p>
Full article ">Figure 6
<p>(<b>a</b>) Lock-in filtered microphone signal (integration time of 1.14 s) while sweeping the LED modulation frequency over the third harmonic of the resonator at different NO<math display="inline"> <semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics> </math> concentrations. A reading is taken every 0.75 s. The resonance frequency shift is due to the dilution of NO<math display="inline"> <semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics> </math> in synthetic air. (<b>b</b>) Mean value of the three highest points of the sweep, together with their standard deviation (<math display="inline"> <semantics> <mrow> <mo>±</mo> <mn>1</mn> <mspace width="0.166667em"/> <mi>σ</mi> </mrow> </semantics> </math>) and the linear correlation.</p>
Full article ">Figure 7
<p>Time resolved gas measurement result of the photoacoustic sensor normalized to the zero concentration. Due to the sweep over the complete resonance peak the data rate is around 0.25 min<math display="inline"> <semantics> <msup> <mrow/> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </semantics> </math>. The noise analysis showed a noise equivalent (1 <math display="inline"> <semantics> <mi>σ</mi> </semantics> </math>) of approximately 32 ppb. Therefore the sensor can detect concentrations of around 200 ppb with 6 <math display="inline"> <semantics> <mi>σ</mi> </semantics> </math> confidence.</p>
Full article ">Figure 8
<p>Calculated Allan deviation of the ambient air signal over a measuring time of 72 h. The left axis shows the Allan deviation in counts. On the right axis it is converted into ppb using the sensor’s sensitivity of 125 counts per ppm.</p>
Full article ">Figure A1
<p>Measurement with the same setup but with an H-cell similar to the work of El-Safoury et al. [<a href="#B14-sensors-19-00724" class="html-bibr">14</a>]. The H-cell has a resonator length of 40 mm and a diameter of 3 mm. The buffer volumes both have a diameter and a length of 20 mm. In this setup, the offset is equivalent to 65.4 ppm NO<math display="inline"> <semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics> </math>. Due to the high offset, the zero gas signal is more vulnerable to drift effects and therefore less stable.</p>
Full article ">
12 pages, 3866 KiB  
Article
A Wear Debris Segmentation Method for Direct Reflection Online Visual Ferrography
by Song Feng, Guang Qiu, Jiufei Luo, Leng Han, Junhong Mao and Yi Zhang
Sensors 2019, 19(3), 723; https://doi.org/10.3390/s19030723 - 11 Feb 2019
Cited by 12 | Viewed by 4554
Abstract
Wear debris in lube oil was observed using a direct reflection online visual ferrograph (OLVF) to monitor the machine running condition and judge wear failure online. Existing research has mainly concentrated on extracting wear debris concentration and size from ferrograms under transmitted light; reports on segmentation algorithms for wear debris ferrograms under reflected light are lacking. In this paper, a wear debris segmentation algorithm based on edge detection and contour classification is proposed. The optimal segmentation threshold is obtained by an adaptive Canny algorithm, and the contour classification filling method is applied to overcome the excessive brightness or darkness of some wear debris, which is often neglected by traditional segmentation algorithms such as the Otsu and Kittler algorithms. Full article
(This article belongs to the Special Issue Sensors for Prognostics and Health Management)
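Among the baselines mentioned in the abstract, Otsu's global threshold is simple enough to sketch in NumPy. This is a textbook implementation run on a synthetic image, intended only to illustrate what the adaptive Canny plus contour-filling approach is being compared against:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0       # background mean
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1  # foreground mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Synthetic two-class "ferrogram": dark background with a brighter debris blob
rng = np.random.default_rng(1)
img = rng.normal(60, 10, (64, 64))
img[20:40, 20:40] += 120  # bright "debris" region
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
binary = img > t
print(f"Otsu threshold: {t}, debris pixels: {int(binary.sum())}")
```

A single global threshold like this fails when debris is simultaneously brighter and darker than the background, which is exactly the failure mode the edge-based method in the paper targets.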
Show Figures

Figure 1
<p>Proposed wear debris segmentation method.</p>
Full article ">Figure 2
<p>The back-to-back gearbox rig equipped with an online visual ferrograph (OLVF): (<b>a</b>) photograph; (<b>b</b>) schematic.</p>
Full article ">Figure 3
<p>Typical OLVF ferrogram under reflected light: (<b>a</b>) wear debris image; (<b>b</b>) image background.</p>
Full article ">Figure 4
<p>Wear debris image enhancement: (<b>a</b>) background subtraction; (<b>b</b>) black hat operation; (<b>c</b>) superimposed wear debris image; (<b>d</b>) bilateral filtering.</p>
Figure 4 Cont.">
Full article ">Figure 5
<p>Superimposed wear debris image: (<b>a</b>) Otsu method; (<b>b</b>) superposition of the wear debris image after binarization and treatment by the adaptive Canny algorithm.</p>
Full article ">Figure 6
<p>Final binarized wear debris image.</p>
Full article ">Figure 7
<p>Topography of the grayscale ferrogram.</p>
Full article ">Figure 8
<p>Images produced by different binarization methods: (<b>a</b>) Otsu method; (<b>b</b>) iterative method; (<b>c</b>) Kittler method; (<b>d</b>) Niblack method; (<b>e</b>) cyclic threshold method; and (<b>f</b>) fixed threshold method.</p>
Full article ">Figure 9
<p>Segmentation of ferrograms with small wear debris: (<b>a</b>–<b>c</b>) reflected light OLVF ferrograms; (<b>d</b>–<b>f</b>) binarization images.</p>
Full article ">Figure 10
<p>Segmentation of ferrograms with large wear debris: (<b>a</b>–<b>c</b>) reflected light OLVF ferrograms; (<b>d</b>–<b>f</b>) binarization images.</p>
Full article ">
22 pages, 5190 KiB  
Article
Collective Anomalies Detection for Sensing Series of Spacecraft Telemetry with the Fusion of Probability Prediction and Markov Chain Model
by Jingyue Pang, Datong Liu, Yu Peng and Xiyuan Peng
Sensors 2019, 19(3), 722; https://doi.org/10.3390/s19030722 - 11 Feb 2019
Cited by 22 | Viewed by 4788
Abstract
Telemetry series, generally acquired from sensors, are the only basis for the ground management system to judge the working performance and health status of an orbiting spacecraft. In particular, anomalies within telemetry can reflect sensor failure, transmission errors, and major faults of the related subsystem. Therefore, anomaly detection for telemetry series has drawn great attention in the aerospace area, where probability prediction methods, e.g., Gaussian process regression and the relevance vector machine, have an inherent advantage for anomaly detection in time series through their ability to represent uncertainty. However, labelling single points with probability prediction produces many isolated false alarms, as well as a lower detection rate for collective anomalies, which significantly limits its practical application. Simple sliding window fusion can decrease the false positives, but the number of supporting anomalies within the sliding window is difficult to set effectively for different series. Therefore, in this work, a Markov chain model is fused with the probability prediction-based method to compute the support probability of each testing series and thus improve detection of collective anomalies. Experiments on simulated data sets and actual telemetry series validated the effectiveness and applicability of the proposed method. Full article
(This article belongs to the Special Issue Sensors for Prognostics and Health Management)
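The fusion idea can be illustrated with a toy sketch: point-wise labels (here assumed already produced by a probability prediction model) are scored by a first-order Markov chain trained on normal data. All names and the tiny data set below are hypothetical:

```python
import numpy as np

def train_markov(label_seqs):
    """Estimate a 2-state (0 = normal, 1 = anomalous) transition matrix
    from point-wise anomaly labels, with add-one smoothing."""
    counts = np.ones((2, 2))
    for seq in label_seqs:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def window_log_prob(trans, window):
    """Log-probability of a label window under the trained chain; low values
    indicate label patterns that are implausible for normal operation."""
    return sum(np.log(trans[a, b]) for a, b in zip(window[:-1], window[1:]))

# Normal training data: isolated false alarms appear as lone 1s
normal = [[0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0]]
trans = train_markov(normal)

isolated = [0, 0, 1, 0, 0]    # lone alarm: still plausible under the chain
collective = [0, 1, 1, 1, 0]  # run of alarms: implausible, flag the window
print(window_log_prob(trans, isolated), window_log_prob(trans, collective))
```

Thresholding this support probability per sliding window replaces the fixed support number of anomalies that the abstract notes is hard to tune across different series, though the threshold itself still needs calibration.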
Show Figures

Figure 1
<p>Anomaly detection with the probability prediction model.</p>
Full article ">Figure 2
<p>One labelling example for the solar temperature series with the Gaussian process regression (GPR) model.</p>
Full article ">Figure 3
<p>Markov chain training for normal available data.</p>
Full article ">Figure 4
<p>The proposed anomaly detection with the fusion of probability prediction and Markov chain model.</p>
Full article ">Figure 5
<p>One example of Keogh Data and Ma Data.</p>
Full article ">Figure 6
<p>The detection results with the GPR model fused with different labelling strategies under different sizes of the sliding window.</p>
Full article ">Figure 7
<p>The detection results for the Keogh Data with three labelling strategies under the sliding window size of 10.</p>
Full article ">Figure 8
<p>The detection results for one series of the Keogh Data with the relevance vector machine (RVM) model.</p>
Full article ">Figure 9
<p>The detection results for the Keogh Data based on the RVM model with three labelling strategies under the sliding window size of 10.</p>
Full article ">Figure 10
<p>The detection results for Ma Data with the GPR model under different sliding window sizes.</p>
Full article ">Figure 11
<p>The detection results for the Ma Data based on the GPR model with three labelling strategies under the sliding window size of 10.</p>
Full article ">Figure 12
<p>The detection results for Ma Data with the RVM model under different sliding window sizes.</p>
Full article ">Figure 13
<p>The detection results for the Ma Data based on the RVM model with three labelling strategies under the sliding window size of 10.</p>
Full article ">Figure 14
<p>The detection results for the telemetry series with the GPR model.</p>
Full article ">Figure 15
<p>The battery temperature series.</p>
Full article ">Figure 16
<p>Anomaly detection with different labelling strategies.</p>
Full article ">
19 pages, 2275 KiB  
Article
Household Power Demand Prediction Using Evolutionary Ensemble Neural Network Pool with Multiple Network Structures
by Songpu Ai, Antorweep Chakravorty and Chunming Rong
Sensors 2019, 19(3), 721; https://doi.org/10.3390/s19030721 - 10 Feb 2019
Cited by 42 | Viewed by 5386
Abstract
The progress of technology in the energy and IoT fields has led to an increasingly complicated electric environment in the low-voltage local microgrid, along with the extension of electric vehicles, micro-generation, and local storage. A home energy management system (HEMS) is required to efficiently integrate and manage household energy micro-generation, consumption, and storage, in order to realize decentralized local energy systems at the community level. Domestic power demand prediction is of great importance for a HEMS realizing load balancing as well as other smart energy solutions with the support of IoT techniques. Artificial neural networks with various network types (e.g., DNN, LSTM/GRU-based RNN) and other configurations are widely utilized for energy prediction. However, the selection of a network configuration is generally a case-by-case study achieved through empirical or enumerative approaches. Moreover, the commonly utilized network initialization methods assign parameter values based on random numbers, which causes variability in model performance, including learning efficiency, forecast accuracy, etc. In this paper, an evolutionary ensemble neural network pool (EENNP) method is proposed to automatically achieve a population of well-performing networks with proper combinations of configuration and initialization. In the experimental study, power demand predictions of multiple households are explored in three application scenarios: optimizing the potential network configuration set, forecasting single-household power demand, and refilling missing data. The impacts of evolutionary parameters on model performance are investigated. The experimental results illustrate that the proposed method achieves better solutions in the considered scenarios. The potential network configuration set optimized using EENNP achieves a result similar to manual optimization, and the household demand prediction and missing data refilling results outperform the naïve and simple predictors. Full article
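The evolutionary loop behind an ensemble pool can be sketched generically. The configuration tuple, fitness function, and mutation operator below are hypothetical stand-ins; EENNP itself trains and evaluates actual neural networks at each step:

```python
import random

random.seed(0)

# Toy "configuration": (hidden_units, learning_rate_exponent)
def fitness(cfg):
    """Hypothetical fitness surface peaking at 64 units and lr = 1e-3."""
    units, lr_exp = cfg
    return -((units - 64) ** 2 / 1000.0 + (lr_exp + 3) ** 2)

def mutate(cfg):
    units, lr_exp = cfg
    return (max(1, units + random.randint(-8, 8)),
            lr_exp + random.choice([-1, 0, 1]))

# Random initial population of configurations
pop = [(random.randint(8, 128), random.randint(-5, -1)) for _ in range(20)]

history = []
for generation in range(30):
    pop.sort(key=fitness, reverse=True)
    history.append(fitness(pop[0]))      # best-so-far (elitism keeps it)
    survivors = pop[:10]                 # the surviving "pool"
    children = [mutate(random.choice(survivors)) for _ in range(10)]
    pop = survivors + children

best = max(pop, key=fitness)
print("best configuration:", best)
```

Because the best half always survives, the best fitness in the population is non-decreasing across generations; the paper's evolutionary parameters (population size, survival ratio, mutation strength) correspond to the constants hard-coded here.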
Show Figures

Figure 1
<p>The architecture of special units in RNN: (<b>a</b>) an LSTM unit; (<b>b</b>) a GRU.</p>
Full article ">Figure 2
<p>An overview of the proposed method.</p>
Full article ">Figure 3
<p>Workflow of the evolutionary ensemble method.</p>
Full article ">Figure 4
<p>The distribution of the interval between adjacent rows (<math display="inline"><semantics> <mrow> <mrow> <mo>[</mo> <mrow> <mn>9.9</mn> <mo>,</mo> <mtext> </mtext> <mn>10.1</mn> </mrow> <mo>]</mo> </mrow> </mrow> </semantics></math>).</p>
Full article ">Figure 5
<p>Power demand of a day.</p>
Full article ">Figure 6
<p>The performance of 500 random initialized networks which are randomly generated using the potential network configuration set: (<b>a</b>) Distribution of training stopping epochs; (<b>b</b>) Distribution of performance of the networks.</p>
Full article ">Figure 7
<p>The performance of EENNP methods using different potential network configuration sets: (<b>a</b>) The performance of two runs of EENNP with the optimized potential network configuration set, and two networks within the final survival population; (<b>b</b>) The performance of four runs of EENNP, two of which use the optimized potential network configuration set and the other two the unoptimized set; (<b>c</b>) The comparison among the four EENNP runs in detail.</p>
Full article ">Figure 8
<p>Performance of EENNP on different households.</p>
Full article ">Figure 9
<p>The impact of evolutionary parameters on model performance.</p>
Full article ">
35 pages, 1306 KiB  
Article
Energy/Area-Efficient Scalar Multiplication with Binary Edwards Curves for the IoT
by Carlos Andres Lara-Nino, Arturo Diaz-Perez and Miguel Morales-Sandoval
Sensors 2019, 19(3), 720; https://doi.org/10.3390/s19030720 - 10 Feb 2019
Cited by 9 | Viewed by 4900
Abstract
Making Elliptic Curve Cryptography (ECC) available for the Internet of Things (IoT) and related technologies is a recent topic of interest. Modern IoT applications transfer sensitive information which needs to be protected. This is a difficult task due to the processing power and memory availability constraints of the physical devices. ECC mainly relies on scalar multiplication (kP), an operation-intensive procedure. The vast majority of kP proposals in the literature focus on performance improvements and often overlook the energy footprint of the solution. Some IoT technologies, Wireless Sensor Networks (WSN) in particular, are critically sensitive in that regard. In this paper, we explore energy-oriented improvements applied to a low-area scalar multiplication architecture for Binary Edwards Curves (BEC), selected for their efficiency. The design and implementation costs of each of these energy-oriented techniques in hardware are reported. We propose an evaluation method for measuring the effectiveness of these optimizations. Under this novel approach, the energy-reducing techniques explored in this work yield, to the best of our knowledge, the scalar multiplication architecture with the most efficient area/energy trade-offs in the literature. Full article
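The kP operation that dominates the cost discussion above is a scalar multiplication. Its control structure, one group doubling plus one addition per key bit regardless of the bit value, can be illustrated with a generic Montgomery ladder. The "group" below is a toy additive group of integers modulo a Mersenne prime, not actual binary Edwards curve arithmetic:

```python
def montgomery_ladder(k, P, add, dbl):
    """Compute kP (k >= 1) with a fixed add/double pattern per bit.

    Processing every bit with the same operation pair is what makes the
    ladder attractive for constant-time and energy-predictable designs."""
    R0, R1 = None, P  # None stands in for the group identity; invariant: R1 - R0 = P
    for bit in bin(k)[2:]:  # most-significant bit first
        if bit == '1':
            R0, R1 = add(R0, R1), dbl(R1)
        else:
            R0, R1 = dbl(R0), add(R0, R1)
    return R0

MOD = 2 ** 13 - 1  # toy modulus, so "kP" is simply (k * P) mod MOD

def add(a, b):
    if a is None:
        return b
    if b is None:
        return a
    return (a + b) % MOD

def dbl(a):
    return None if a is None else (2 * a) % MOD

print(montgomery_ladder(1234, 7, add, dbl))  # -> 447, i.e., (1234 * 7) % 8191
```

Swapping in real BEC point addition and doubling changes the per-bit cost (field multiplications, squarings, the inversions the architectures C0 to C5 vary), but not this loop structure; that is why field-level optimizations dominate the area/energy trade-off.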
Show Figures

Figure 1
<p>Low-area <span class="html-italic">kP</span> architecture, in the following referred to as C0.</p>
Full article ">Figure 2
<p>Architecture for <span class="html-italic">kP</span> featuring Itoh-Tsujii inversion (C1).</p>
Full article ">Figure 3
<p>Architecture C2 for <span class="html-italic">kP</span>; Wang inversion and a digit-multiplier are used.</p>
Full article ">Figure 4
<p>The Itoh-Tsujii inversion is paired with a digit-multiplier in this <span class="html-italic">kP</span> architecture (C3).</p>
Full article ">Figure 5
<p>Architectures for <span class="html-italic">kP</span> featuring dedicated squaring modules.</p>
Full article ">Figure 6
<p>FPGA area, power dissipation, and energy consumption for the different <span class="html-italic">kP</span> architectures.</p>
Full article ">Figure 7
<p>Percentile area and energy increments for architectures C2, C3, and C5 with reference to C0.</p>
Full article ">Figure 8
<p>Evaluation of the efficiency metric for the different <span class="html-italic">kP</span> configurations.</p>
Full article ">Figure 9
<p>Energy consumption of the <span class="html-italic">kP</span> architectures at different operational frequencies.</p>
Full article ">Figure 10
<p>Evaluation of the efficiency metric for the different <span class="html-italic">kP</span> configurations using two operational frequencies.</p>
Full article ">Figure 11
<p>Implementation results for the architectures in [<a href="#B22-sensors-19-00720" class="html-bibr">22</a>].</p>
Full article ">Figure 12
<p>Evaluation of the efficiency metric for the results from [<a href="#B22-sensors-19-00720" class="html-bibr">22</a>].</p>
Full article ">Figure 13
<p>Curve fitting for the hardware usage and energy consumption of architectures C2, C3, and C5.</p>
Full article ">Figure 14
<p>Evaluation of the efficiency metric for the C2, C3, and C5 configurations based on model data (<math display="inline"><semantics> <msub> <mrow> <mi>E</mi> <mi>F</mi> <mi>F</mi> </mrow> <mi>m</mi> </msub> </semantics></math>), compared to the evaluation based on real data (<span class="html-italic">EFF</span>).</p>
Full article ">Figure 15
<p>Evaluation of the efficiency metric for the different works in the literature, ours included.</p>
Full article ">Figure 16
<p>Efficiency scores for the different architectures in the literature. Values that are more negative represent greater energy savings overall.</p>
Full article ">