
Sensors, Volume 17, Issue 2 (February 2017) – 210 articles

Cover Story: Plasmon-assisted microscopy of nano-objects is a highly sensitive, label-free method that helps to detect, size, and quantify individual nanoparticles. The PAMONO-sensor enables specific detection of viruses, virus-like particles and microvesicles in aquatic samples. Sensor surface functionalization with protein A/G facilitates analysis of the surface markers and content of the captured biological vesicles. These features make the PAMONO-sensor an attractive analytical tool for screening liquid biopsy samples.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Review
Emerging Cytokine Biosensors with Optical Detection Modalities and Nanomaterial-Enabled Signal Enhancement
by Manpreet Singh, Johnson Truong, W. Brian Reeves and Jong-in Hahm
Sensors 2017, 17(2), 428; https://doi.org/10.3390/s17020428 - 22 Feb 2017
Cited by 46 | Viewed by 10928
Abstract
Protein biomarkers, especially cytokines, play a pivotal role in the diagnosis and treatment of a wide spectrum of diseases. Therefore, a critical need for advanced cytokine sensors has been rapidly growing and will continue to expand to promote clinical testing, new biomarker development, and disease studies. In particular, sensors employing transduction principles of various optical modalities have emerged as the most common means of detection. In typical cytokine assays which are based on the binding affinities between the analytes of cytokines and their specific antibodies, optical schemes represent the most widely used mechanisms, with some serving as the gold standard against which all existing and new sensors are benchmarked. With recent advancements in nanoscience and nanotechnology, many of the recently emerging technologies for cytokine detection exploit various forms of nanomaterials for improved sensing capabilities. Nanomaterials have been demonstrated to exhibit exceptional optical properties unique to their reduced dimensionality. Novel sensing approaches based on the newly identified properties of nanomaterials have shown drastically improved performances in both the qualitative and quantitative analyses of cytokines. This article brings together the fundamentals in the literature that are central to different optical modalities developed for cytokine detection. Recent advancements in the applications of novel technologies are also discussed in terms of those that enable highly sensitive and multiplexed cytokine quantification spanning a wide dynamic range. For each highlighted optical technique, its current detection capabilities as well as associated challenges are discussed. Lastly, an outlook for nanomaterial-based cytokine sensors is provided from the perspective of optimizing the technologies for sensitivity and multiplexity as well as promoting widespread adaptations of the emerging optical techniques by lowering high thresholds currently present in the new approaches. Full article
(This article belongs to the Special Issue Semiconductor Materials on Biosensors Application)
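Quantitative cytokine assays of the kind surveyed in this review are typically read out against a sigmoidal calibration curve. As a purely illustrative aside (not taken from the paper), the following minimal Python sketch fits a four-parameter logistic calibration curve to hypothetical immunoassay data and inverts it to estimate an unknown sample concentration; all numbers and the initial guess are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, a, b, c50, d):
    """Four-parameter logistic: optical signal as a function of concentration."""
    return d + (a - d) / (1.0 + (c / c50) ** b)

# Hypothetical calibration points: concentration (pg/mL) vs. optical readout (a.u.)
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])
signal = np.array([0.05, 0.12, 0.21, 0.55, 0.80, 1.10, 1.18])

# Fit the curve; p0 is a rough guess for (zero-dose response, slope, EC50, plateau).
params, _ = curve_fit(four_pl, conc, signal, p0=[0.0, 1.0, 70.0, 1.25], maxfev=10000)

def concentration_from_signal(s, a, b, c50, d):
    """Invert the fitted curve to read a sample concentration off its signal."""
    return c50 * ((a - d) / (s - d) - 1.0) ** (1.0 / b)

print(concentration_from_signal(0.6, *params))   # unknown sample measured at 0.6 a.u.
```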
Show Figures

Graphical abstract

Figure 1
The different types of protein assay schemes used commonly in solid state sensor geometry. They are label-free (or direct), directly labelled, indirectly labelled, and sandwich type of assays. Labels targeting the analyte protein can be organic (fluorophores), inorganic (semiconducting nanoparticles, quantum dots, metallic nanoparticles), biological (enzymes), or mass-increasing compounds.
Figure 2
In a sandwich immunoassay involving HRP (yellow sphere)-labelled TNF-α antibody, the concentration of TNF-α analyte protein can be measured by examining the characteristic peaks for the oxidized products (indicated as blue sphere) of the TMB substrate (indicated as white sphere) whose conversion is triggered by HRP in the presence of H2O2. Reproduced with permission from Ref. [65]. Copyright (2011) American Chemical Society.
Figure 3
Schematics illustrating the excitation and emission processes of a fluorophore in the absence and vicinity of a metal. S_0 and S_1 represent the ground and excited electronic energy states, respectively. E denotes the excitation energy, whereas k_r, k_nr, and k_m are rate constants associated with the different radiative and nonradiative emission processes.
Figure 4
(A) The distance dependence of a fluorophore in the vicinity of a metal nanoparticle on the resulting fluorescence rate is shown. The solid and dotted data are from theoretical and experimental outcomes, respectively; (B) The fluorescence image is acquired from a single fluorophore molecule located 2 nm away from the metal. The dip in the center is due to fluorescence quenching; (C) The corresponding theoretical image for (B) is provided. Reproduced with permission from Ref. [76]. Copyright (2006) American Physical Society.
Figure 5
(A) Scanning electron microscope images of plasmonic Au nanoisland-covered glass beads are displayed; (B) Comparison of the fluorescence signal from the Cy5-avidin coated plasmonic gold beads versus glass beads in flow cytometry; (C) The mean Cy5 fluorescence intensity plot shows an enhancement of ~2 orders of magnitude on the plasmonic bead relative to the glass bead; (D) Sandwich assay schemes for IL-6 detection on a plasmonic Au bead and on a glass bead are shown; (E) Fluorescence data for IL-6 detection on (left) plasmonic gold beads and (right) glass beads are plotted as a function of the protein concentration. Reproduced with permission from Ref. [79]. Copyright (2014) Royal Society of Chemistry.
Figure 6
Normalized fluorescence signals from fluorophore-linked immunoglobulin G (IgG) deposited on various platforms are compared. In this comparison experiment, protein deposition conditions were kept identical for all cases. Biomolecular fluorescence emission on the ZnO NR platform was the highest when compared to the emission intensities measured on conventional substrates such as glass, silica, quartz, and polymers. Adapted with permission from Ref. [83]. Copyright (2016) American Chemical Society.
Figure 7
Spatially localized, temporally extended biomolecular fluorescence signal observed on individual ZnO NRs. (A,B) The contour map in (A) and the time-lapse images in (B) display the fluorescence intensities of fluorophore-linked IgG antibodies, DTAF-antiIgG, measured along the long axis of a 25 μm-long ZnO NR; (C) Differences in the time-dependent decay profile of the fluorescence intensity were clearly observed depending on the ZnO NR crystal facets. Red and black data show biomolecular fluorescence measured from the NR end and main body, respectively; (D) FDTD simulations were carried out to obtain the radiation patterns from a single emitter radiating at 576 nm placed 10 nm from the ZnO NR surface, with the dipole oscillation polarized along the X- (top), Y- (middle), and Z- (bottom) direction; (E) The dimensional effect of ZnO NRs on FINE was evaluated by simulating far-field radiation patterns of a 517 nm electric dipole. A pair of far-field patterns is shown for each NR of the specified length (L) and width (d), where the top/bottom simulation corresponds to the spatial patterns observed from the Z-/X-axis. Reproduced with permission from Ref. [85]. Copyright (2014) Royal Society of Chemistry.
Figure 8
(A) The different growth orientations of the ZnO NRs are schematically shown; (B) The presence of FINE was confirmed for all cases when the spatial and temporal emission behavior of fluorophore-linked IgG antibodies, 1 μg/mL TRITC-antiIgG, was monitored in time-lapse fluorescence panels, as shown for the case of a L-ZnO NR (top) and a V-ZnO NR_ef (bottom). The magnitude and degree of FINE were much higher for V-ZnO NRs relative to L-ZnO NRs; (C) ZnO NRs of various lengths and widths were analyzed after treating them with IgG antibodies coupled to a different fluorophore, 5 μg/mL of DTAF-antiIgG. In all cases, the difference in the normalized fluorescence intensity measured at the NR end versus NR side facets, ΔI = I_avg,NRef − I_avg,NRsf, indicated that the degree of FINE increased as the NR length became longer. The NR width did not lead to a significant effect. Reproduced with permission from Ref. [86]. Copyright (2015) Royal Society of Chemistry.
Figure 9
(A) The 3D bar graphs display the TNF-α concentrations in ICU patient urine samples measured by the ELISA- and ZnO NR-based platforms for comparison. The grey regions in the ELISA row correspond to missing concentration data, indicating that the TNF-α levels in the samples were below the ELISA DL of 5.5 pg/mL. In contrast, ZnO NRs were able to measure the TNF-α concentrations of all 46 patients. The magnifier signs inserted in the ZnO NRs row correspond to the patients that belong to the grey area of the ELISA-based assay, and the bar graphs of these patients are shown separately in (B,C) for clarity; (B,C) The zoomed-in 3D bar graphs are the missing TNF-α concentrations that were revealed by the ZnO NR-based assay. The upper limits of the vertical ranges in (B,C) are adjusted to 2 pg/mL and 350 fg/mL, respectively, in order to show the variations in the TNF-α concentrations between patients more clearly. The truncated bars indicate that their TNF-α concentrations exceed the upper limit of the 3D graph. Reproduced with permission from Ref. [87]. Copyright (2016) Royal Society of Chemistry.
Figure 10
(A) The layout of the SiO2/TiO2 PC platform fabricated using a nanoreplica molding process; (B) Fluorescence data obtained from an identical immunoassay performed using a PC and a glass slide. The fluorescence images and the corresponding intensity profiles show that the SNR of the fluorescence assay is about 8 times higher than the ratio from the glass slide assay; (C) Net fluorescence intensity as a function of TNF-α concentration for the immunoassay performed on the PC on-resonance, the PC off-resonance, and the glass slide; (D) The lower concentration data in (C) are magnified to clearly display the results of the three lowest assay concentrations. Reproduced with permission from Ref. [91]. Copyright (2008) American Chemical Society.
Figure 11
(A) Schematic illustration depicts the overall fluorescence assay for a target protein using double stranded aptamers on RGO (DAGO) nanosheets; (B) The fluorescence spectrum is obtained after incubating 1 μg/mL of IFN-γ with 10 μg/mL DAGO for 3 min; (C,D) IFN-γ assay results obtained by using DAGO (C) and an IFN-γ ELISA kit (D) are displayed. The assays were conducted using the same normal and HIV positive patient serum samples for comparison between the two methods. Reproduced with permission from Ref. [99]. Copyright (2014) Elsevier.
Figure 12
Comparison spectra of QDs and organic fluorescent dyes are displayed. Data shown as lines and symbols represent absorption and emission spectra, respectively. (A–C) QD spectra. Representative absorption and emission spectra are shown for the QDs of (A) CdSe, (B) CdTe, and (C) InP. (D–F) Organic dye spectra. Typical absorption and emission spectra for model organic dyes are displayed for (D) Cy3 and Cy5, (E) Megastokes and (F) Nile Red. Different colors in the spectra are coded by the size of the dye (blue < green < black < red). Reproduced with permission from Ref. [104]. Copyright (2008) Nature Publishing Group.
Figure 13
(A) An illustration of a localized surface plasmon around a metal nanoparticle; (B) Typical examples of surface plasmon peaks appearing in the visible range for the metals of Ag, Au, and Cu.
Figure 14
(A) The schematics display grating-, prism-, and waveguide-based light coupling configurations of SPR. Reproduced with permission from Ref. [113]. Copyright (2016) MDPI; (B,C) The illustrations show (B) a prism-based SPR sensor and its signal typically measured in terms of the change in the resonance angle and (C) a LSPR sensor detecting shifts in a spectral peak. Reproduced with permission from Ref. [114]. Copyright (2014) Royal Society of Chemistry.
Figure 15
LSPR-based cytokine detection in healthy versus patient serum samples. (A) Dark-field images of AuNR microarrays within a single microfluidic detection channel loaded with different sample mixtures of recombinant cytokines (500 pg/mL for each cytokine) spiked in a serum matrix; (B) Cytokine concentrations are quantified for the samples in (A). The dashed black line represents the predetermined value of 500 pg/mL. The dashed grey line represents the DL of the LSPR microarray; (C) Correlations are made between the data obtained from the LSPR microarray assay and the gold standard ELISA for the spiked-in serum samples; (D) Five-day cytokine concentration variations were measured by the LSPR microarray assay for serum samples extracted from two post-cardiopulmonary bypass (CPB)-surgery pediatric patients. Reproduced with permission from Ref. [41]. Copyright (2015) American Chemical Society.
Figure 16
The schematics represent a FOPPR sensor setup and its working principle. The setup consists of a function generator, a light-emitting diode, a FOPPR sensing chip, a photodiode, a lock-in amplifier, and a computer, as indicated in A through F, respectively. Biomolecular binding at the surface of the functionalized AuNP layer (indicated in yellow) results in increased absorbance and decreased light intensity exiting the optical fiber (shown in green). Reproduced with permission from Ref. [122]. Copyright (2010) Elsevier.
Figure 17
(A) Various geometries of microcavity resonators; (B) The original and modified optical waves in resonance are shown for the cases of before and after the binding of a bioanalyte on the sensor surface. The resonance wavelength change associated with this event is recorded in the transmission spectrum as Δλ.
Figure 18
(A) 1D, 2D, and 3D PC systems are shown with the different colors corresponding to materials of different refractive indices; (B) Upon analyte binding, the resonant wavelength of the reflected light changes from the original value by Δλ.
Figure 19
(A) Scanning electron microscopy images of a silicon microring resonator chip are shown. The chip consisted of an array of 30 μm-sized rings, of which 32 rings were used for simultaneous signal monitoring. Each 30 μm ring resonator was accessed by a separate linear waveguide and interfaced with microfluidic channels for sample delivery; (B) The microrings were functionalized with capture antibodies specific for various cytokine targets. Following incubation of samples with a cocktail of secondary antibodies, the one-step sandwich immunoassay was monitored in real time for each ring. Cytokines bind specifically to their capture antibodies in a complex with the cognate detection antibody, thus enhancing the signal; (C) Multiple cytokines at varying concentrations were simultaneously quantified based on the initial slope (Δpm/min) of the sensor response upon sample introduction. The plot displays data from one representative ring for each cytokine assessed. Reproduced with permission from Ref. [140]. Copyright (2011) American Chemical Society.
Figure 20
(A) Various structures of macroscopic and bulk waveguides are depicted; (B–D) 1D nanomaterial-based, subwavelength waveguides are displayed. (B) The waveguides are fabricated using the optical cavities of a GaN NW, SnO2 nanoribbon, and ZnO NW with diameters between 130–250 nm. Reproduced with permission from Ref. [142]. Copyright (2005) Proceedings of the National Academy of Sciences USA; (C,D) The conical emission of a low-order guided mode from a ZnO NR is shown in (C). The emission angle is approximately 90° (dotted lines). The image result in (D) is from a 2D finite difference time domain (FDTD) calculation of the square of the electric field of a light pulse emitted from a ZnO NR of diameter d. Reproduced with permission from Ref. [143]. Copyright (2007) American Chemical Society.
Figure 21
Illustration of a standing wave pattern formed near the interface between a waveguide with a higher refractive index and a medium with a lower refractive index (n_waveguide > n_medium) and the exponentially decaying evanescent wave with the penetration depth d_p.
Figure 22
(A) A diagram of the CTFOB measurement setup is shown. LD, BF, ND, SF, FL, LF, and FCL denote the laser diode, band pass filter, neutral density filter, short pass filter, focusing lens, long pass filter, and focusing-collecting lens, respectively; (B) A dose-response curve is generated from the average fluorescence signal of various IL-6 concentrations. Reproduced with permission from Ref. [151]. Copyright (2009) Elsevier.
Figure 23
(A) Schematics of IRIS showing the interference of light reflected from the reference plane of the Si-SiO2 interface with the light from the top surface. Added biomass results in the wavelength-dependent reflectivity, as indicated with a double-headed arrow; (B) The overall layout for the optical setup in IRIS including an x-cube (XC) used to combine the beams of the different light emitting diodes (LEDs), a beam splitter (BS), and a charged coupled device (CCD) detector. Panel (B) is reproduced with permission from Ref. [154]. Copyright (2010) American Chemical Society.
Article
Dual MIMU Pedestrian Navigation by Inequality Constraint Kalman Filtering
by Wei Shi, Yang Wang and Yuanxin Wu
Sensors 2017, 17(2), 427; https://doi.org/10.3390/s17020427 - 22 Feb 2017
Cited by 34 | Viewed by 5404
Abstract
The foot-mounted inertial navigation system is an important method of pedestrian navigation, as in principle it does not rely on any external assistance. A real-time range decomposition constraint method is proposed in this paper to combine the information of dual foot-mounted inertial navigation systems. It is well known that low-cost inertial pedestrian navigation aided with both ZUPT (zero velocity update) and the range decomposition constraint performs better than with either aid alone. This paper recommends that the separation distance between the position estimates of the two foot-mounted inertial navigation systems be restricted by an ellipsoidal constraint that relates to the maximum step length and the leg height. The performance of the proposed method is studied using experimental data, and the results indicate that the method corrects the positions of the dual navigation systems more effectively than the traditional spherical constraint. Full article
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems)
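For readers unfamiliar with the idea behind the ellipsoidal constraint, the Python sketch below shows only the underlying geometry: clipping the separation of the two foot-position estimates to an ellipsoid whose horizontal semi-axes stand for a maximum step length and whose vertical semi-axis stands for the leg height. The parameter values are invented, and this is not the paper's inequality-constrained Kalman filter update.

```python
import numpy as np

def ellipsoid_clip(p_left, p_right, gamma=0.8, h=0.5):
    """Clip the separation of the two foot-position estimates to an ellipsoid.

    gamma : assumed maximum step length (horizontal semi-axes), metres
    h     : assumed leg-height bound (vertical semi-axis), metres
    Only the geometric clipping idea is shown; the paper embeds the inequality
    as a constraint inside the Kalman filter update itself.
    """
    d = p_right - p_left                                  # relative position vector
    q = (d[0] / gamma) ** 2 + (d[1] / gamma) ** 2 + (d[2] / h) ** 2
    if q <= 1.0:                                          # already inside the ellipsoid
        return p_left, p_right
    shrink = 1.0 / np.sqrt(q)                             # scale d back onto the surface
    correction = 0.5 * (1.0 - shrink) * d                 # split the correction symmetrically
    return p_left + correction, p_right - correction

# Example: the estimates drift 1.2 m apart horizontally under a 0.8 m step-length bound.
left = np.array([0.0, 0.0, 0.0])
right = np.array([1.2, 0.0, 0.1])
print(ellipsoid_clip(left, right))
```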
Show Figures

Figure 1
(Left) The two MIMUs are mounted to the feet separately; the OX_bY_bZ_b coordinate system denotes the carrier coordinates. (Right) Side view; the OX_nY_nZ_n coordinate system denotes the navigation coordinates.
Figure 2
The ellipsoid constraint calibration diagram (h < γ).
Figure 3
Level constraint for one foot: a circle of radius γ, centered at the other foot.
Figure 4
Pedestrian walking a closed path in a corridor outside the laboratory. (a) Feet trajectories without constraint; (b) feet trajectories with spherical constraint; (c) feet trajectories with ellipsoidal constraint.
Figure 5
Left and right feet altitude difference in the 2D closed-path experiment. (a) The height difference of the two feet without constraint; (b) the height difference of the two feet with spherical constraint; (c) the height difference of the two feet with ellipsoidal constraint.
Full article ">Figure 5 Cont.
<p>Left and right feet altitude difference in 2D the closed path experiment. (<b>a</b>) the height difference of the two feet without contraint; (<b>b</b>) the height difference of the two feet with spherical constraint; (<b>c</b>) the height difference of the two feet with ellipsoidal constraint.</p>
Full article ">Figure 6
<p>The trajectories of the feet in upstairs experiment. (<b>a</b>) feet trajectories without constraint; (<b>b</b>) feet trajectories with spherical constraint; (<b>c</b>) feet trajectories with ellipsoidal constraint.</p>
Full article ">Figure 6 Cont.
<p>The trajectories of the feet in upstairs experiment. (<b>a</b>) feet trajectories without constraint; (<b>b</b>) feet trajectories with spherical constraint; (<b>c</b>) feet trajectories with ellipsoidal constraint.</p>
Full article ">Figure 7
<p>The left and right feet position altitude difference in the upstairs experiment. (<b>a</b>) the height difference of the two feet without constraint; (<b>b</b>) the height difference of the two feet with spherical constraint; (<b>c</b>) the height difference of the two feet with ellipsoidal constraint.</p>
Full article ">
Article
The Impact of 3D Stacking and Technology Scaling on the Power and Area of Stereo Matching Processors
by Seung-Ho Ok, Yong-Hwan Lee, Jae Hoon Shim, Sung Kyu Lim and Byungin Moon
Sensors 2017, 17(2), 426; https://doi.org/10.3390/s17020426 - 22 Feb 2017
Cited by 2 | Viewed by 5759
Abstract
Recently, stereo matching processors have been adopted in real-time embedded systems such as intelligent robots and autonomous vehicles, which require minimal hardware resources and low power consumption. Meanwhile, thanks to the through-silicon via (TSV), three-dimensional (3D) stacking technology has emerged as a practical solution for achieving the desired requirements of a high-performance circuit. In this paper, we present the benefits of 3D stacking and process technology scaling on stereo matching processors. We implemented 2-tier 3D-stacked stereo matching processors with GlobalFoundries 130-nm and Nangate 45-nm process design kits and compare them with their two-dimensional (2D) counterparts to identify comprehensive design benefits. In addition, we examine the findings from various analyses to identify the power benefits of 3D-stacked integrated circuit (IC) and device technology advancements. From experiments, we observe that the proposed 3D-stacked ICs, compared to their 2D IC counterparts, achieve 43% area, 13% power, and 14% wire-length reductions. In addition, we present a logic partitioning method suitable for a pipeline-based hardware architecture that minimizes the use of TSVs. Full article
(This article belongs to the Special Issue Advances on Resources Management for Multi-Platform Infrastructures)
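The window-based matching operation illustrated in this paper's Figure 1 (reference window, candidate windows over a disparity range, dissimilarity score, depth map) can be emulated in software. The Python sketch below uses a plain sum-of-absolute-differences cost; the window size and disparity range are illustrative values, and the hardware pipeline itself is of course not modeled.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=3):
    """Window-based stereo matching with a sum-of-absolute-differences cost."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            ref = left[y - win:y + win + 1, x - win:x + win + 1].astype(np.float32)
            costs = [
                np.abs(ref - right[y - win:y + win + 1,
                                   x - d - win:x - d + win + 1].astype(np.float32)).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))   # disparity with the lowest dissimilarity
    return disp

# Toy check: the right image is the left image shifted by 5 pixels, so the
# recovered disparity should be about 5 away from the image borders.
rng = np.random.default_rng(0)
L = rng.random((48, 80))
R = np.roll(L, -5, axis=1)
print(np.median(sad_disparity(L, R)[10:-10, 30:-10]))
```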
Show Figures

Figure 1
(a) Left image (R_win: reference window); (b) right image (d_x: disparity range, C_win: candidate window); (c) dissimilarity between R_win and C_win; and (d) a depth map.
Figure 2
Flow diagram of the stereo matching processor.
Figure 3
Illustration of the multiple-read, single-write operation of the stereo matching algorithm.
Figure 4
Pipelined hardware architecture of our stereo matching processor.
Figure 5
Via-first bonding technology used in this paper: (a) Side view of via-first TSVs; and (b) top-down view of TSVs.
Figure 6
2D and 3D IC design flow.
Figure 7
(a) The conventional macro-level partitioning method; and (b) the proposed pipeline-level partitioning method.
Figure 8
An illustration of the proposed pipeline-level partitioning method: (a) Split the pipeline stages into two tiers, and (b) adjust the number of SRAMs in each tier.
Figure 9
Overall flow of the power and timing analyses for a 3D IC.
Figure 10
Comparisons between the normalized designs of 2D and 3D ICs: (a) 2D and 3D ICs designed in 130-nm process technology; and (b) 2D and 3D ICs designed in 45-nm process technology.
Figure 11
Layout snapshots of 2D and 3D ICs designed in 130-nm process technology: (a) 2D IC (2D-130); (b) the top and bottom tiers of a 3D IC using macro-level partitioning (3D-MP-130); and (c) the top and bottom tiers of a 3D IC using pipeline-level partitioning (3D-PP-130).
Figure 12
Layout snapshots of 2D and 3D ICs designed in 45-nm process technology: (a) 2D IC (2D-45); (b) the top and bottom tiers of a 3D IC using macro-level partitioning (3D-MP-45); and (c) the top and bottom tiers of a 3D IC using pipeline-level partitioning (3D-PP-45).
Figure 13
Normalized power comparisons of 2D and 3D ICs: (a) 130-nm process technology and (b) 45-nm process technology.
Figure 14
Normalized power comparisons of 2D and 3D ICs: (a) 130-nm process technology and (b) 45-nm process technology.
Figure 15
Comparisons of the normalized power of 2D and 3D ICs as a function of switching activity: (a) Total power; (b) net switching power; (c) cell internal power; (d) cell leakage power. Note that the power consumption of 2D-130 actually increases as the switching activity increases.
Figure 16
Comparisons of the normalized power of 2D and 3D ICs as a function of switching activity: (a) Total power; (b) net switching power; (c) cell internal power; (d) cell leakage power. Note that the power consumption of 2D-45 actually increases as the switching activity increases.
Article
A New Deep Learning Model for Fault Diagnosis with Good Anti-Noise and Domain Adaptation Ability on Raw Vibration Signals
by Wei Zhang, Gaoliang Peng, Chuanhao Li, Yuanhang Chen and Zhujun Zhang
Sensors 2017, 17(2), 425; https://doi.org/10.3390/s17020425 - 22 Feb 2017
Cited by 1281 | Viewed by 28550
Abstract
Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses wide kernels in the first convolutional layer for extracting features and suppressing high-frequency noise. Small convolutional kernels in the subsequent layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that the accuracy of CNNs applied to fault diagnosis is currently not very high. WDCNN can not only achieve 100% classification accuracy on normal signals, but also outperform the state-of-the-art DNN model, which is based on frequency features, under different working loads and noisy environments. Full article
(This article belongs to the Section Physical Sensors)
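The data augmentation mentioned in the abstract (and shown in this paper's Figure 5) is, in essence, slicing the raw vibration record into overlapping windows. Below is a minimal Python sketch of that idea, with an assumed window length and stride rather than the paper's actual parameters.

```python
import numpy as np

def augment_with_overlap(signal, length=2048, stride=128):
    """Slice one raw 1-D vibration record into overlapping training samples.

    length and stride are illustrative; the paper's exact augmentation
    parameters are not reproduced here.
    """
    n = (len(signal) - length) // stride + 1
    return np.stack([signal[i * stride:i * stride + length] for i in range(n)])

record = np.random.randn(120_000)      # stand-in for one raw accelerometer record
samples = augment_with_overlap(record)
print(samples.shape)                   # roughly (922, 2048) overlapping samples
```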
Show Figures

Figure 1
Three intelligent fault diagnosis frameworks: (a) traditional method; (b) features extracted by unsupervised learning [27]; (c) the proposed method.
Figure 2
The overall framework of the proposed WDCNN with AdaBN domain adaptation.
Figure 3
Architecture of the proposed WDCNN model.
Figure 4
Domain adaptation framework for WDCNN.
Figure 5
Data augmentation with overlap.
Figure 6
Motor-driven mechanical system used by CWRU.
Figure 7
Diagnosis results using different numbers of training samples.
Figure 8
Feature visualization via t-SNE: last hidden fully-connected layer representation for the test samples in WDCNN trained with different numbers of training samples: (a) 90 training samples; (b) 300 training samples; (c) 3000 training samples and (d) 19,800 training samples.
Figure 9
Results of the proposed WDCNN and WDCNN (AdaBN) for six domain shifts on Datasets A, B and C, compared with FFT-SVM, FFT-MLP and FFT-DNN.
Figure 10
Feature visualization via t-SNE: last hidden fully-connected layer representation of WDCNN for (a) test sets C and B before AdaBN, and (b) test sets C and B after AdaBN.
Figure 11
The original inner race fault signal, the additive white Gaussian noise, and the resulting composite noisy signal with SNR = 0 dB.
Figure 12
Comparison of classification accuracy under different noisy environments.
Figure 13
Visualization of (a) the convolutional kernels learned by WDCNN and (b) their frequency-domain representation.
Figure 14
Visualization of the activations from the first convolutional layer with 10 kinds of fault signals as input. Red represents a maximum activation, while blue means the neuron is not activated.
Figure 15
Visualization of all convolutional neuron activations in WDCNN for (a) a segment of normal vibration signal and (b) a segment of fault signal (inner race fault with 0.014-inch fault diameter). Red represents a maximum activation, while blue means the neuron is not activated.
Figure 16
Visualization of the feature distribution of all noise-free test samples extracted from each convolutional layer and the last fully-connected layer via the t-SNE method.
Article
Experimental Validation of Depth Cameras for the Parameterization of Functional Balance of Patients in Clinical Tests
by Francisco-Ángel Moreno, José Antonio Merchán-Baeza, Manuel González-Sánchez, Javier González-Jiménez and Antonio I. Cuesta-Vargas
Sensors 2017, 17(2), 424; https://doi.org/10.3390/s17020424 - 22 Feb 2017
Cited by 12 | Viewed by 6023
Abstract
In clinical practice, patients’ balance can be assessed using standard scales. Two of the most validated clinical tests for measuring balance are the Timed Up and Go (TUG) test and the MultiDirectional Reach Test (MDRT). Nowadays, inertial sensors (IS) are employed for kinematic analysis of functional tests in the clinical setting, and have become an alternative to expensive 3D optical motion capture systems. In daily clinical practice, however, IS-based setups are still cumbersome and inconvenient to apply. Current depth cameras have the potential for such application, presenting many advantages, for instance being portable, low-cost and minimally invasive. This paper aims at experimentally validating to what extent this technology can substitute for IS in the parameterization and kinematic analysis of the TUG and MDRT tests. Twenty healthy young adults were recruited as participants to perform five different balance tests while kinematic data from their movements were measured by both a depth camera and an inertial sensor placed on their trunk. The reliability of the camera’s measurements is examined through the Intraclass Correlation Coefficient (ICC), whilst the Pearson Correlation Coefficient (r) is computed to evaluate the correlation between both sensors’ measurements, revealing excellent reliability and strong correlations in most cases. Full article
(This article belongs to the Collection Sensors for Globalized Healthy Living and Wellbeing)
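The agreement statistics reported here (ICC and Pearson's r) are standard. As a small illustration, Pearson's r between paired per-trial measurements from the two devices can be computed with SciPy; the numbers below are invented, and the ICC computation (which needs an ANOVA-style variance decomposition) is not reproduced.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented per-trial summary measurements (e.g., phase durations in seconds)
camera = np.array([1.21, 0.98, 1.35, 1.10, 1.02, 1.28])
imu = np.array([1.18, 1.01, 1.30, 1.12, 0.99, 1.31])

r, p_value = pearsonr(camera, imu)        # strength of agreement between devices
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```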
Show Figures

Figure 1
(a) Axes orientation of the inertial sensor and (b) its position on the participant’s body. (c) PrimeSense’s RGB-D camera and its main characteristics.
Figure 2
Experiment setup for the (a) MDRTs and the (b) TUG test. (c) Joints detected with the depth camera.
Figure 3
Skeleton’s movement detected in the MDRTs and the TUG test.
Figure 4
Results for the forward MDRT. (a) Comparison of the measured displacement provided by the depth camera (solid blue) and the inertial sensor (dashed red) for a representative participant. The positions of the defined control points are also shown; (b–d) mean time (with CI 95%), mean angular displacement (with CI 95%) and mean angular velocity (with CI 95%), respectively, over all the participants for the intervals defined by the control points; (e) Pearson Correlation Coefficient (r) (p < 0.005) between the devices for all the measured variables. (Time is expressed in seconds, angular displacement in degrees, and angular velocity in degrees per second.)
Figure 5
Results for the leftward MDRT. (a) Comparison of the measured displacement provided by the depth camera (solid blue) and the inertial sensor (dashed red) for a representative participant. The positions of the defined control points are also shown; (b–d) mean time (with CI 95%), mean angular displacement (with CI 95%) and mean angular velocity (with CI 95%), respectively, over all the participants for the intervals defined by the control points; (e) Pearson Correlation Coefficient (r) (p < 0.005) between the devices for all the measured variables. (Time is expressed in seconds, angular displacement in degrees, and angular velocity in degrees per second.)
Figure 6
Results for the rightward MDRT. (a) Comparison of the measured displacement provided by the depth camera (solid blue) and the inertial sensor (dashed red) for a representative participant. The positions of the defined control points are also shown; (b–d) mean time (with CI 95%), mean angular displacement (with CI 95%) and mean angular velocity (with CI 95%), respectively, over all the participants for the intervals defined by the control points; (e) Pearson Correlation Coefficient (r) (p < 0.005) between the devices for all the measured variables. (Time is expressed in seconds, angular displacement in degrees, and angular velocity in degrees per second.)
Figure 7
Results for the backward MDRT. (a) Comparison of the measured displacement provided by the depth camera (solid blue) and the inertial sensor (dashed red) for a representative participant. The positions of the defined control points are also shown; (b–d) mean time (with CI 95%), mean angular displacement (with CI 95%) and mean angular velocity (with CI 95%), respectively, over all the participants for the intervals defined by the control points; (e) Pearson Correlation Coefficient (r) (p < 0.005) between the devices for all the measured variables. (Time is expressed in seconds, angular displacement in degrees, and angular velocity in degrees per second.)
Figure 8
Results for the TUG test. (a) Comparison of the measured displacement provided by the depth camera (solid blue) and the inertial sensor (dashed red) for a representative participant. The positions of the defined control points are also shown; (b–d) mean time (with CI 95%), mean angular displacement (with CI 95%) and mean angular velocity (with CI 95%), respectively, over all the participants for the intervals defined by the control points; (e) Pearson Correlation Coefficient (r) (p < 0.005) between the devices for all the measured variables. (Time is expressed in seconds, angular displacement in degrees, and angular velocity in degrees per second.)
Article
A Temperature-Dependent Battery Model for Wireless Sensor Networks
by Leonardo M. Rodrigues, Carlos Montez, Ricardo Moraes, Paulo Portugal and Francisco Vasques
Sensors 2017, 17(2), 422; https://doi.org/10.3390/s17020422 - 22 Feb 2017
Cited by 40 | Viewed by 7469
Abstract
Energy consumption is a major issue in Wireless Sensor Networks (WSNs), as nodes are powered by chemical batteries with an upper bounded lifetime. Estimating the lifetime of batteries is a difficult task, as it depends on several factors, such as operating temperatures and discharge rates. Analytical battery models can be used for estimating both the battery lifetime and the voltage behavior over time. Still, available models usually do not consider the impact of operating temperatures on the battery behavior. The target of this work is to extend the widely-used Kinetic Battery Model (KiBaM) to include the effect of temperature on the battery behavior. The proposed Temperature-Dependent KiBaM (T-KiBaM) is able to handle operating temperatures, providing better estimates for the battery lifetime and voltage behavior. The performed experimental validation shows that T-KiBaM achieves an average accuracy error smaller than 0.33%, when estimating the lifetime of Ni-MH batteries for different temperature conditions. In addition, T-KiBaM significantly improves the original KiBaM voltage model. The proposed model can be easily adapted to handle other battery technologies, enabling the consideration of different WSN deployments. Full article
(This article belongs to the Special Issue Wireless Rechargeable Sensor Networks)
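For orientation, the Kinetic Battery Model that T-KiBaM extends tracks an "available" and a "bound" charge well exchanging charge at a rate constant k. The Python sketch below is a plain Euler discretization of that two-well model with a placeholder linear temperature factor; it is not the paper's fitted T-KiBaM parameterization, and all constants are illustrative only.

```python
def kibam_step(y1, y2, i_load, dt, c=0.6, k=1e-4, temp_c=25.0):
    """One Euler step of the Kinetic Battery Model (KiBaM).

    y1, y2  : charge in the available / bound well (mAh)
    i_load  : discharge current in mAh per second
    c, k    : well capacity ratio and rate constant (illustrative values)
    The temperature factor below is only a placeholder for the T-KiBaM idea of
    temperature-dependent parameters, not the paper's fitted model.
    """
    k_eff = k * (1.0 + 0.01 * (temp_c - 25.0))   # assumed linear temperature dependence
    h1, h2 = y1 / c, y2 / (1.0 - c)              # "heights" of the two wells
    flow = k_eff * (h2 - h1)                     # charge diffusing between the wells
    return y1 + dt * (-i_load + flow), y2 - dt * flow

# Discharge a nominal 2500 mAh cell at 250 mA and 10 °C until the available well empties.
y1, y2, t = 0.6 * 2500.0, 0.4 * 2500.0, 0.0
while y1 > 0.0:
    y1, y2 = kibam_step(y1, y2, i_load=250.0 / 3600.0, dt=1.0, temp_c=10.0)
    t += 1.0
print(f"estimated lifetime: {t / 3600.0:.2f} h")
```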
Show Figures

Figure 1
The structure of a Ni-MH battery [29].
Figure 2
Kinetic Battery Model (KiBaM) [18].
Figure 3
An example of KiBaM behavior over time.
Figure 4
Test-bed used for the experimental assessments.
Figure 5
Connectivity scheme of the test-bed system.
Figure 6
Experimental results at different temperatures.
Figure 7
The battery-specific behavior. (a) Experimental results; (b) Curve fitting.
Figure 8
Experimental vs. analytical results.
Figure 9
Experimental vs. T-KiBaM analytical results at different temperatures.
Figure 10
Experimental vs. T-KiBaM analytical comparison at different temperatures.
Figure 11
Voltage level tracking comparison.
Article
Efficient Data Collection in Widely Distributed Wireless Sensor Networks with Time Window and Precedence Constraints
by Peng Liu, Tingting Fu, Jia Xu and Yue Ding
Sensors 2017, 17(2), 421; https://doi.org/10.3390/s17020421 - 22 Feb 2017
Cited by 1 | Viewed by 4801
Abstract
In addition to the traditional densely deployed cases, widely distributed wireless sensor networks (WDWSNs) have begun to emerge. In these networks, sensors are far away from each other and have no network connections. In this paper, a special application of data collection for WDWSNs is considered where each sensor (Unmanned Ground Vehicle, UGV) moves in a hazardous and complex terrain with many obstacles. They have their own work cycles and can be accessed only at a few locations. A mobile sink cruises on the ground to collect data gathered from these UGVs. Considerable delay is inevitable if the UGV and the mobile sink miss the meeting window or wait idly at the meeting spot. The unique challenge here is that, for each cycle of an UGV, there is only a limited time window for it to appear in front of the mobile sink. Therefore, we propose scheduling the path of a single mobile sink, targeted at visiting a maximum number of UGVs in a timely manner with the shortest path, according to the timing constraints bound by the cycles of UGVs. We then propose a bipartite matching based algorithm to reduce the number of mobile sinks. Simulation results show that the proposed algorithm can achieve performance close to the theoretical maximum determined by the duty cycle instance. Full article
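The sink-reduction step described in the abstract rests on minimum-cost bipartite matching. The sketch below only demonstrates that primitive with SciPy's Hungarian-algorithm solver on an invented cost matrix between meeting points of two groups; it is not the paper's full scheduling algorithm, and the infinity entries merely stand for pairs ruled out by the UGVs' time windows.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented cost matrix: cost[i, j] = travel time from meeting point i in one
# group to meeting point j in the next group (np.inf marks infeasible pairs).
cost = np.array([[np.inf, 4.0,    9.0],
                 [7.0,    np.inf, 3.0],
                 [5.0,    8.0,    np.inf]])

BIG = 1e6                                        # large finite penalty for infeasible pairs
finite = np.where(np.isinf(cost), BIG, cost)

rows, cols = linear_sum_assignment(finite)       # minimum-cost bipartite matching
for i, j in zip(rows, cols):
    if finite[i, j] < BIG:
        print(f"one sink path can cover point {i} of group 1 and point {j} of group 2")
```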
Show Figures

Figure 1
Sensor network that monitors a certain area with multiple events.
Figure 2
All possible edges in the precedence graph.
Figure 3
k-hop search for a better path (k = 3) (the coordinate of a point does not represent its physical location, and the length of an edge does not reflect the distance between two points).
Figure 4
Collected data during a period.
Figure 5
Two paths cross the same mutual point.
Figure 6
Optimal path finding via bipartite matching between point groups.
Figure 7
The demo shows how Algorithm 4 works.
Figure 8
Simulation model.
Figure 9
Amount of collected data (visited UGVs) during a period. (a) Collected data in the ideal case; (b) collected data in the practical case.
Figure 10
Distribution of the number of successful collection times for each UGV. (a) Distribution in the ideal case; (b) distribution in the practical case.
Figure 11
Performance comparison for different numbers of UGVs. (a) Distribution in the ideal case; (b) distribution in the practical case.
Figure 12
Experimental results of mobile sink optimization in the ideal case and the practical case. (a) Number of mobile sinks in the ideal case; (b) number of mobile sinks in the practical case.
Article
A Soft Sensor-Based Three-Dimensional (3-D) Finger Motion Measurement System
by Wookeun Park, Kyongkwan Ro, Suin Kim and Joonbum Bae
Sensors 2017, 17(2), 420; https://doi.org/10.3390/s17020420 - 22 Feb 2017
Cited by 64 | Viewed by 16989
Abstract
In this study, a soft sensor-based three-dimensional (3-D) finger motion measurement system is proposed. The sensors, made of the soft material Ecoflex, comprise embedded microchannels filled with a conductive liquid metal (EGaIn). The superior elasticity, light weight, and sensitivity of soft sensors allow them to be embedded in environments in which conventional sensors cannot. Complicated finger joints, such as the carpometacarpal (CMC) joint of the thumb, are modeled to specify the location of the sensors. Algorithms to decouple the signals from the soft sensors are proposed to extract the pure flexion, extension, abduction, and adduction joint angles. The performance of the proposed system and algorithms is verified by comparison with a camera-based motion capture system. Full article
(This article belongs to the Special Issue Flexible Electronics and Sensors)
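Reduced to its simplest linear form, the decoupling idea is to invert a per-user calibration that mixes flexion and abduction into each sensor's resistance change. Below is a toy Python sketch with made-up calibration coefficients; the paper's actual decoupling algorithms are more involved.

```python
import numpy as np

# Assumed linear calibration: each sensor's normalized resistance change is a
# mixture of the flexion and abduction angles. The coefficients are invented
# stand-ins for a per-user calibration, not values from the paper.
A = np.array([[0.9, 0.2],    # sensor 1: mostly flexion, slight abduction pickup
              [0.1, 0.7]])   # sensor 2: mostly abduction

def decouple(delta_r):
    """Recover [flexion, abduction] angles (degrees) from the coupled readings."""
    return np.linalg.solve(A, delta_r)

# Simulate a reading for 30° flexion and 10° abduction, plus a little noise.
reading = A @ np.array([30.0, 10.0]) + np.random.normal(0.0, 0.2, size=2)
print(decouple(reading))   # approximately [30, 10]
```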
Show Figures

Figure 1
Examples of finger motion measurement systems: (a) X-ray based system [2]; (b) Infrared (IR) camera based system [3]; (c) Three dimensional position sensor system [4]; (d) Optical linear encoder based system [5]; (e) Potentiometer-based system [6]; (f) CyberGlove [7]; (g) A glove with embedded strain sensors [8]; (h) Wearable soft artificial skin [9].
Figure 2
Skeletal structure of the human hand.
Figure 3
Configuration of the soft sensor-based 3-D finger motion measurement system.
Figure 4
Prototypes of the soft sensors.
Figure 5
Resistance change of the soft sensor with respect to the strain; (a) 10%–100% strain; (b) 10%–30% strain.
Figure 6
Modeling of the metacarpophalangeal (MCP) and the proximal interphalangeal (PIP) joints; (a) Flexion/extension; (b) Abduction/adduction.
Figure 7
The relationship between the abduction/adduction of the carpometacarpal (CMC) and MCP joints; (a) Abduction/adduction of the CMC and MCP joints of the thumb; (b) Linear relationship between the CMC and MCP joint angles.
Figure 8
Examples of finger joint models; (a) Anatomical model for the CMC joint [18]; (b) finger joint model for the CyberGlove [20].
Figure 9
Proposed model for the finger joints.
Figure 10
Experimental verification for the thumb finger model; (a) Defining rotation axes at the CMC joint; (b) Position of the thumb fingertip measured by the proposed model.
Figure 11
Necessity of algorithms decoupling the soft sensor signals at the CMC and MCP joints.
Figure 12
Decoupling algorithms for the sensor signals: (a) Decoupling algorithm for the MCP joint; (b) Decoupling algorithm for the CMC joint.
Figure 13
Experimental setup to verify the 3-D finger motion: (a) Attached markers on the index finger; (b) Attached markers on the thumb.
Figure 14
Calibration procedures.
Figure 15
Comparison of the joint angles with MoCap data: (a) Comparison for the index finger; (b) Comparison for the thumb finger.
Figure 16
Finger animation by the measured 3-D finger joint angles: (a) Measured 3-D motion of the index finger; (b) Measured 3-D motion of the thumb.
Article
Long Term Amperometric Recordings in the Brain Extracellular Fluid of Freely Moving Immunocompromised NOD SCID Mice
by Caroline H. Reid and Niall J. Finnerty
Sensors 2017, 17(2), 419; https://doi.org/10.3390/s17020419 - 22 Feb 2017
Cited by 9 | Viewed by 4557
Abstract
We describe the in vivo characterization of microamperometric sensors for the real-time monitoring of nitric oxide (NO) and oxygen (O2) in the striatum of immunocompromised NOD SCID mice. The latter strain has been utilized routinely in the establishment of humanized models of disease, e.g., Parkinson’s disease. NOD SCID mice were implanted with highly sensitive and selective NO and O2 sensors that have been previously characterized both in vitro and in freely moving rats. Animals were systemically administered compounds that perturbed the amperometric current and confirmed sensor performance. Furthermore, the stability of the amperometric current was investigated and 24 h recordings were examined. Saline injections caused transient changes in both currents that were not significantly different from baseline. L-NAME caused significant decreases in the NO (p < 0.05) and O2 (p < 0.001) currents compared to saline. L-Arginine produced a significant increase (p < 0.001) in the NO current, and chloral hydrate and Diamox (acetazolamide) caused significant increases in the O2 signal (p < 0.01) compared against saline. The stability of both currents was confirmed over an eight-day period, and analysis of 24-h recordings identified diurnal variations in both signals. These findings confirm the efficacy of the amperometric sensors to perform continuous and reliable recordings in immunocompromised mice. Full article
(This article belongs to the Section Biosensors)
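The "% current response" and ΔI quantities quoted in the figures below are simple transformations of the raw amperometric trace. Here is a small Python sketch of that bookkeeping, with an invented trace and an assumed pre-injection baseline window; it is generic, not the authors' exact processing pipeline.

```python
import numpy as np

def percent_response(current, fs, baseline_s=60.0):
    """Express an amperometric trace as % of its pre-injection baseline.

    current : raw sensor current samples (nA)
    fs      : sampling rate (Hz); the first baseline_s seconds are averaged
              to define 100%.
    """
    baseline = current[: int(baseline_s * fs)].mean()
    pct = 100.0 * current / baseline
    delta_i = current - baseline                # ΔI relative to baseline
    return pct, delta_i

fs = 1.0                                        # assume 1 Hz sampling
trace = np.concatenate([np.full(60, 2.0),       # 60 s mock baseline at 2.0 nA
                        np.full(120, 1.6)])     # mock post-injection dip
pct, d_i = percent_response(trace, fs)
print(pct.min(), d_i.min())                     # approximately 80% and -0.4 nA
```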
Show Figures

Figure 1
Averaged raw data % current response of (A) NO and (B) O2 sensors implanted in the striatum of NOD SCID mice (n = 5) to a 1 mL·kg⁻¹ injection of 0.9% saline. Arrows indicate point of injection. Mean % current response represented by black trace, % error represented by grey trace.
Figure 2
Averaged raw data % current response of (A) NO and (B) O2 sensors implanted in the striatum of NOD SCID mice to a 30 mg·kg⁻¹ injection of L-NAME (n = 5). Arrows indicate point of injection. Mean % current response represented by black trace, % error represented by grey trace. Insets: Comparison of max current response (ΔI) for saline control vs. L-NAME ((A) NO sensor (p < 0.05), unpaired t-test (n = 5) and (B) O2 sensor (p < 0.001), unpaired t-test (n = 5)). Data represented as ΔI ± SEM as compared to baseline. * denotes level of significance.
Full article ">Figure 2 Cont.
<p>Averaged raw data % current response of (<b>A</b>) NO and (<b>B</b>) O<sub>2</sub> sensors implanted in the striatum of NOD SCID mice to a 30 mg·kg<sup>−1</sup> injection of <span class="html-small-caps">l</span>-NAME (<span class="html-italic">n</span> = 5). Arrows indicate point of injection. Mean % current response represented by black trace, % error represented by grey trace. Insets: Comparison of max current response (ΔI) for saline control vs. <span class="html-small-caps">l</span>-NAME ((<b>A</b>) NO sensor (<span class="html-italic">p</span> &lt; 0.05), unpaired <span class="html-italic">t</span>-test (<span class="html-italic">n</span> = 5) and (<b>B</b>) O<sub>2</sub> sensor (<span class="html-italic">p</span> &lt; 0.001), unpaired <span class="html-italic">t</span>-test (<span class="html-italic">n</span> = 5)). Data represented as ΔI ± SEM as compared to baseline. * denotes level of significance.</p>
Full article ">Figure 3
<p>Averaged raw data % current response of NO sensors implanted in the striatum of NOD SCID mice to a 200 mg·kg<sup>−1</sup> injection of <span class="html-small-caps">l</span>-arginine (<span class="html-italic">n</span> = 5). Arrow indicates point of injection. Mean % current response represented by black trace, % error represented by grey trace. Inset: Comparison of ΔI for saline control vs. <span class="html-small-caps">l</span>-arginine (<span class="html-italic">p</span> &lt; 0.001), unpaired <span class="html-italic">t</span>-test (<span class="html-italic">n</span> = 5). Data represented as ΔI ± SEM as compared to baseline. * denotes level of significance</p>
Full article ">Figure 4
<p>Averaged raw data % current response of O<sub>2</sub> sensors implanted in the striatum of NOD SCID mice to a (<b>A</b>) 350 mg·kg<sup>−1</sup> injection of chloral hydrate (<span class="html-italic">n</span> = 5) and (<b>B</b>) 50 mg·kg<sup>−1</sup> injection of Diamox. Arrows indicate point of injection. Mean % current response represented by black trace, % error represented by grey trace. Insets: Comparison of ∆I for (<b>A</b>) saline control vs. chloral hydrate (<span class="html-italic">p</span> &lt; 0.01), unpaired <span class="html-italic">t</span>-test (<span class="html-italic">n</span> = 5) and (<b>B</b>) saline control vs. Diamox (<span class="html-italic">p</span> &lt; 0.01), unpaired <span class="html-italic">t</span>-test (<span class="html-italic">n</span> = 5). Data represented as ∆I ± SEM as compared to baseline. * denotes level of significance.</p>
Full article ">Figure 4 Cont.
<p>Averaged raw data % current response of O<sub>2</sub> sensors implanted in the striatum of NOD SCID mice to a (<b>A</b>) 350 mg·kg<sup>−1</sup> injection of chloral hydrate (<span class="html-italic">n</span> = 5) and (<b>B</b>) 50 mg·kg<sup>−1</sup> injection of Diamox. Arrows indicate point of injection. Mean % current response represented by black trace, % error represented by grey trace. Insets: Comparison of ∆I for (<b>A</b>) saline control vs. chloral hydrate (<span class="html-italic">p</span> &lt; 0.01), unpaired <span class="html-italic">t</span>-test (<span class="html-italic">n</span> = 5) and (<b>B</b>) saline control vs. Diamox (<span class="html-italic">p</span> &lt; 0.01), unpaired <span class="html-italic">t</span>-test (<span class="html-italic">n</span> = 5). Data represented as ∆I ± SEM as compared to baseline. * denotes level of significance.</p>
Full article ">Figure 5
<p>Average % baseline currents for (<b>A</b>) NO sensors (<span class="html-italic">n</span> = 6) and (<b>B</b>) O<sub>2</sub> sensors (<span class="html-italic">n</span> = 5) implanted in the striatum of NOD SCID mice.</p>
Full article ">Figure 6
<p>Averaged 24 h concentration dynamics measured using (<b>A</b>) NO and (<b>B</b>) O<sub>2</sub> sensors implanted in the striatum of NOD SCID mice. Mean concentration represented by black trace, error represented by grey trace. 0–12 h represent light phase (07.00–19.00) and 12–24 h represent dark phase (19.00–07.00). Vertical dashed line signifies end of light phase/start of dark phase.</p>
Full article ">
4357 KiB  
Article
Data Collection and Analysis Using Wearable Sensors for Monitoring Knee Range of Motion after Total Knee Arthroplasty
by Chih-Yen Chiang, Kun-Hui Chen, Kai-Chun Liu, Steen Jun-Ping Hsu and Chia-Tai Chan
Sensors 2017, 17(2), 418; https://doi.org/10.3390/s17020418 - 22 Feb 2017
Cited by 73 | Viewed by 9111
Abstract
Total knee arthroplasty (TKA) is the most common treatment for degenerative osteoarthritis of the knee joint. However, whether in rehabilitation clinics or in hospital wards, the knee range of motion (ROM) can currently only be assessed using a goniometer. In order to provide continuous and objective measurements of knee ROM, we propose the use of wearable inertial sensors to record the knee ROM during the recovery progress. Digitalized and objective data can assist surgeons in tracking recovery status and flexibly adjusting rehabilitation programs during the early acute inpatient stage. The more knee flexion ROM regained during the early inpatient period, the better the long-term knee recovery will be and the sooner early discharge can be achieved. The results of this work show that the proposed wearable sensor approach can provide an alternative to traditional goniometer measurements for continuous monitoring and objective assessment of knee ROM recovery progress in TKA patients. Full article
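As a rough illustration of how a knee angle can be derived from two inertial sensors (one on the thigh, one on the shank), the sketch below estimates each segment's inclination from the gravity direction seen by its accelerometer and takes the difference as a knee flexion angle. The axis convention follows Figure 2 of the paper; the simple subtraction is an assumption for illustration, not the authors' full sensor-fusion pipeline.

```python
# Illustrative sketch (assumptions, not the authors' algorithm): estimate a knee
# flexion angle as the difference between thigh and shank inclinations, each
# derived from the gravity vector measured by a static accelerometer sample.
import numpy as np

def inclination_deg(acc_xyz):
    """Angle between the segment X axis (aligned with gravity at rest) and the
    measured gravity vector, in degrees."""
    a = np.asarray(acc_xyz, dtype=float)
    a /= np.linalg.norm(a)
    return np.degrees(np.arccos(np.clip(a[0], -1.0, 1.0)))

# Placeholder static accelerometer samples (in g) for the thigh and shank sensors.
thigh_acc = [0.94, 0.05, 0.34]
shank_acc = [0.50, 0.02, 0.87]

knee_angle = abs(inclination_deg(shank_acc) - inclination_deg(thigh_acc))
print(f"estimated knee flexion angle: {knee_angle:.1f} deg")
```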
(This article belongs to the Special Issue Wearable and Ambient Sensors for Healthcare and Wellness Applications)
Show Figures

Figure 1: The workflow and tasks of this study.
Figure 2: The coordinate system and the sensor model on the right leg. The X axis is the direction toward gravity, the Y axis is the left-lateral direction and the Z axis is the walking direction. (S_T denotes a sensor on the thigh and S_S a sensor on the shank; r_HT is the distance from the hip joint to the thigh sensor, r_KT the distance from the knee joint to the thigh sensor, and r_KS the distance from the knee joint to the shank sensor.)
Figure 3: θ1 is the inclined angle of a_K − g in relation to the thigh segment and θ2 the inclined angle of a_K − g in relation to the shank segment.
Figure 4: The robotic arm for sensor calibration.
Figure 5: Two sensors were mounted on the anterior side of the thigh and shank.
Figure 6: The recovery knee ROM angles collected from 18 TKA patients at four stages.
Figure 7: The recovery knee ROM between patients with and without epidural PCA (EPCA).
Figure 8: The recovery knee ROM between groups of patients differing in use of hemostatic agents.
Figure 9: The recovery knee ROM between fat and thin patients (patients are grouped by BMI over and under 28).
Figure 10: The results of the questionnaire surveying the sensor-worn scenario.
5264 KiB  
Technical Note
Inertial Navigation System/Doppler Velocity Log (INS/DVL) Fusion with Partial DVL Measurements
by Asaf Tal, Itzik Klein and Reuven Katz
Sensors 2017, 17(2), 415; https://doi.org/10.3390/s17020415 - 22 Feb 2017
Cited by 126 | Viewed by 13574
Abstract
The Technion autonomous underwater vehicle (TAUV) is an ongoing project aiming to develop and produce a small AUV to carry out research missions, including payload dropping, and to demonstrate acoustic communication. Its navigation system is based on an inertial navigation system (INS) aided by a Doppler velocity log (DVL), a magnetometer, and a pressure sensor (PS). In many INSs, such as the one used in the TAUV, only the velocity vector (provided by the DVL) can be used for aiding the INS, i.e., enabling only a loosely coupled integration approach. In cases of partial DVL measurements, such as failure to maintain bottom lock, the DVL cannot estimate the vehicle velocity. Thus, in partial DVL situations no velocity data can be integrated into the TAUV INS, and as a result its navigation solution drifts in time. To circumvent this problem, we propose a DVL-based vehicle velocity solution that uses the measured partial raw data of the DVL together with additional information, thereby deriving an extended loosely coupled (ELC) approach. Implementing the ELC approach requires only software modification. In addition, we present the TAUV six degrees of freedom (6DOF) simulation, which includes all functional subsystems. Using this simulation, the proposed approach is evaluated and its benefit is demonstrated. Full article
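The sketch below illustrates the general idea behind exploiting partial DVL data: the along-beam measurements that are still valid are stacked with an externally supplied pseudo-measurement, and the body velocity is recovered by least squares. The beam geometry, the choice of pseudo-measurement, and all numeric values are assumptions for illustration, not the ELC formulation of the paper.

```python
# Hedged sketch, not the TAUV implementation: recover a 3-D body velocity from a
# subset of DVL beams plus one external pseudo-measurement via least squares.
import numpy as np

# Unit directions of the DVL beams in the body frame (a typical 4-beam Janus
# layout is assumed here; the real geometry may differ).
beam_dirs = np.array([
    [ 0.5,  0.0, 0.866],
    [-0.5,  0.0, 0.866],
    [ 0.0,  0.5, 0.866],
    [ 0.0, -0.5, 0.866],
])

valid = [0, 2]                      # only two beams kept bottom lock
beam_meas = np.array([0.90, 0.40])  # placeholder along-beam velocities (m/s)

# "Additional information": an INS-predicted vertical velocity used as an extra
# pseudo-measurement so that the 3-D body velocity becomes observable.
H = np.vstack([beam_dirs[valid], [0.0, 0.0, 1.0]])
z = np.concatenate([beam_meas, [0.05]])

v_body, *_ = np.linalg.lstsq(H, z, rcond=None)
print("estimated body velocity [m/s]:", v_body.round(3))
```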
(This article belongs to the Special Issue Inertial Sensors and Systems 2016)
Show Figures

Figure 1: Scheme of the loosely coupled and tightly coupled approaches for INS/DVL fusion. AUV: autonomous underwater vehicle; DVL: Doppler velocity log; INS: inertial navigation system.
Figure 2: Technion autonomous underwater vehicle six degrees of freedom (TAUV 6DOF) simulation layout. IMU: inertial measurement unit.
Figure 3: Top-level diagram of the TAUV navigation system. EKF: extended Kalman filter; GPS: global positioning system.
Figure 4: Platform frame and tangent frame with respect to the TAUV.
Figure 5: DVL orientation setup in the TAUV platform frame.
Figure 6: Extended loosely coupled (ELC) scheme for utilizing partial measurements from the DVL.
Figure 7: Block diagram for the implementation of the ELC approach in the TAUV navigation system.
Figure 8: Simulation trajectories used to examine the ELC approaches.
Figure 9: Root mean square (RMS) errors of velocity for all approaches obtained from trajectory #1. TC: tightly coupled.
Figure 10: RMS errors of attitude for all approaches obtained from trajectory #1.
Figure 11: RMS errors of velocity for all approaches obtained from trajectory #2.
Figure 12: RMS errors of attitude for all approaches obtained from trajectory #2.
Figure 13: RMS errors of velocity for all approaches obtained from trajectory #3.
Figure 14: RMS errors of attitude for all approaches obtained from trajectory #3.
4784 KiB  
Article
A Sensor Data Fusion System Based on k-Nearest Neighbor Pattern Classification for Structural Health Monitoring Applications
by Jaime Vitola, Francesc Pozo, Diego A. Tibaduiza and Maribel Anaya
Sensors 2017, 17(2), 417; https://doi.org/10.3390/s17020417 - 21 Feb 2017
Cited by 129 | Viewed by 11023
Abstract
Civil and military structures are susceptible to damage due to environmental and operational conditions. Therefore, implementing technology that provides robust damage identification (using signals acquired directly from the structure) is required to reduce operational and maintenance costs. In this sense, the use of sensors permanently attached to structures has demonstrated great versatility and benefit, since the inspection system can be automated. This automation is carried out through signal processing aimed at pattern recognition analysis. This work presents a detailed description of a structural health monitoring (SHM) system based on the use of a piezoelectric (PZT) active system. The SHM system includes: (i) the use of a piezoelectric sensor network to excite the structure and collect the measured dynamic response, in several actuation phases; (ii) data organization; (iii) advanced signal processing techniques to define the feature vectors; and finally (iv) the nearest neighbor algorithm as a machine learning approach to classify different kinds of damage. A description of the experimental setup, the experimental validation, and a discussion of the results from two different structures are included and analyzed. Full article
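A minimal sketch of the processing chain described above, assuming PCA-based feature vectors fed to a k-nearest-neighbor classifier (scikit-learn); the data are synthetic placeholders, not the piezoelectric measurements used in the paper.

```python
# Sketch of PCA feature extraction + k-NN classification of structural states.
# Synthetic placeholder data; accuracy on random data is chance level.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # placeholder feature matrix (one row per experiment)
y = rng.integers(0, 5, size=200)      # placeholder labels: healthy state + 4 damage classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pca = PCA(n_components=2).fit(X_train)                           # keep the first principal components
knn = KNeighborsClassifier(n_neighbors=10, weights="distance")   # a "weighted k-NN" variant
knn.fit(pca.transform(X_train), y_train)
print("test accuracy:", knn.score(pca.transform(X_test), y_test))
```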
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems)
Show Figures

Figure 1: Classification of the machine learning approaches according to the type of learning.
Figure 2: Data organization for sensor data fusion.
Figure 3: Excitation signal applied to the piezoelectric acting as the actuator in each actuation phase.
Figure 4: Received signals in actuation Phase 1 when Damage 1 is introduced in the structure.
Figure 5: Multiplexor system used in the data acquisition system.
Figure 6: Representation of the structural health monitoring (SHM) system.
Figure 7: General scheme of the SHM system.
Figure 8: Training process with the data from the structure under different structural states.
Figure 9: Testing step with data from the structure in an unknown structural state.
Figure 10: Feature vector organization.
Figure 11: Data organization and testing step for damage detection and classification.
Figure 12: Aluminum rectangular profile instrumented with six piezoelectric sensors.
Figure 13: Aluminum rectangular profile instrumented with six piezoelectric sensors and with four different damages.
Figure 14: Cumulative contribution rate of variance accounted for by the principal components.
Figure 15: Confusion matrix using fine k-NN and medium k-NN when the feature vector is formed by the first principal component.
Figure 16: Confusion matrix using coarse k-NN and cosine k-NN when the feature vector is formed by the first principal component.
Figure 17: Confusion matrix using cubic k-NN and weighted k-NN when the feature vector is formed by the first principal component.
Figure 18: Confusion matrix using weighted k-NN when the feature vector is formed by the first principal component (a) or by the first two principal components (b).
Figure 19: Confusion matrix using weighted k-NN when the feature vector is formed by the first three principal components (a) or by the first four principal components (b).
Figure 20: Confusion matrix using fine k-NN when the feature vector is formed by the first principal component (a) or by the first two principal components (b).
Figure 21: Confusion matrix using fine k-NN when the feature vector is formed by the first three principal components (a) or by the first four principal components (b).
Figure 22: First principal component versus second principal component in the aluminum rectangular profile described in Section 4.1.
Figure 23: Aluminum plate instrumented with four piezoelectric sensors.
Figure 24: Experimental setup for the aluminum plate instrumented with four piezoelectric sensors and with different kinds of damage.
Figure 25: Cumulative contribution rate of variance accounted for by the principal components from the data acquired from the aluminum plate.
Figure 26: Confusion matrix using fine k-NN (a) and weighted k-NN (b) when the feature vector is formed by the first two principal components.
Figure 27: Confusion matrix using coarse k-NN (a) and cosine k-NN (b) when the feature vector is formed by the first two principal components.
Figure 28: First principal component versus second principal component in the aluminum plate described in Section 4.2.
Figure 29: Composite plate instrumented with six piezoelectric sensors.
Figure 30: Experimental setup for the composite plate instrumented with six piezoelectric sensors.
Figure 31: Cumulative contribution rate of variance accounted for by the principal components from the data acquired from the composite plate.
Figure 32: Confusion matrix using fine k-NN (a) and weighted k-NN (b) when the feature vector is formed by the first two principal components.
Figure 33: Confusion matrix using coarse k-NN (a) and cosine k-NN (b) when the feature vector is formed by the first two principal components.
Figure 34: First principal component versus second principal component in the composite plate described in Section 4.3.
20441 KiB  
Article
Dynamic Fluid in a Porous Transducer-Based Angular Accelerometer
by Siyuan Cheng, Mengyin Fu, Meiling Wang, Li Ming, Huijin Fu and Tonglei Wang
Sensors 2017, 17(2), 416; https://doi.org/10.3390/s17020416 - 21 Feb 2017
Cited by 14 | Viewed by 5357
Abstract
This paper presents a theoretical model of the dynamics of liquid flow in an angular accelerometer comprising a porous transducer in a circular tube of liquid. Wave speed and dynamic permeability of the transducer are considered to describe the relation between angular acceleration and the differential pressure on the transducer. The permeability and streaming potential coupling coefficient of the transducer are determined in the experiments, and special prototypes are utilized to validate the theoretical model in both the frequency and time domains. The model is applied to analyze the influence of structural parameters on the frequency response and the transient response of the fluidic system. It is shown that the radius of the circular tube and the wave speed affect the low frequency gain, as well as the bandwidth of the sensor. The hydrodynamic resistance of the transducer and the cross-section radius of the circular tube can be used to control the transient performance. The proposed model provides the basic techniques to achieve the optimization of the angular accelerometer together with the methodology to control the wave speed and the hydrodynamic resistance of the transducer. Full article
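For readers who want to experiment with the kind of frequency-domain behavior discussed above, the sketch below evaluates the Bode response of an assumed second-order transfer function between angular acceleration and differential pressure; the gain, natural frequency, and damping values are placeholders, not parameters identified in the paper.

```python
# Illustrative sketch only: a generic second-order approximation of the
# accelerometer's pressure response, from which low-frequency gain and
# bandwidth can be read.  All parameter values are assumptions.
import numpy as np
from scipy import signal

k = 1.0e-3              # assumed low-frequency gain (Pa per rad/s^2)
wn = 2 * np.pi * 40.0   # assumed natural frequency (rad/s)
zeta = 0.7              # assumed damping ratio

# Second-order model: P(s)/alpha(s) = k*wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
sys = signal.TransferFunction([k * wn**2], [1.0, 2 * zeta * wn, wn**2])
w, mag, phase = signal.bode(sys, w=np.logspace(0, 4, 400))

print(f"low-frequency gain: {mag[0]:.2f} dB")
bw_idx = np.argmax(mag < mag[0] - 3.0)          # first point 3 dB below the LF gain
print(f"-3 dB bandwidth: {w[bw_idx] / (2 * np.pi):.1f} Hz")
```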
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1: Structure of the liquid-circular angular accelerometer (LCAA).
Figure 2: Transducers and the microstructure. (a) Appearance of the transducers; (b) microstructure of the transducer of Type 1.
Figure 3: Instrumentation of the permeability measurement (SurPASS, Anton Paar Co., Graz, Austria).
Figure 4: Experimental results of the permeability.
Figure 5: The relationship of the streaming potential coupling coefficient versus the experimental permeability.
Figure 6: Frequency characteristic of the permeability. (a) Magnitude-frequency characteristic (MFC); (b) phase-frequency characteristic (PFC).
Figure 7: Wave speed in the circular tube. (a) The influence of the gas volume on the wave speed; (b) the influence of wall thickness on the wave speed.
Figure 8: Theoretical frequency response of the fluidic system.
Figure 9: Experiments of Prototype A on the angle-vibration table.
Figure 10: Experimental frequency response of the prototypes.
Figure 11: Theoretical frequency response of the prototypes.
Figure 12: Comparisons among the experimental frequency response and the theoretical results of Prototype B.
Figure 13: Comparison between the output of the prototype and the theoretical model responding to the same angular acceleration input. Prototype B, R_h = 1.29 × 10^10 N·s/m^5, V_g/V = 5%, D = 2r = 8 mm, e = 2 mm, k_0 = 1.98 × 10^−12 m^2.
Figure 14: Relationship of the fluidic system indexes versus the hydrodynamic resistance of the porous transducer, where a = 20 m/s, R = 25 mm, r = 4 mm. (a) Variation of the low frequency gain and the bandwidth; (b) variation of the step response overshoot and the transient time.
Figure 15: Changes of the frequency response and transient response resulting from R_h variation, where a = 20 m/s, R = 25 mm, r = 4 mm. (a) Variation of the frequency response; (b) variation of the step response.
Figure 16: Relationship of the fluidic system indexes versus the wave speed, where R_h = 2 × 10^10 N·s/m^5, R = 25 mm, r = 4 mm. (a) Variation of the low frequency gain and the bandwidth; (b) variation of the step response overshoot and the transient time.
Figure 17: Relationship of the fluidic system indexes versus the radius of the circular tube, where R_h = 2 × 10^10 N·s/m^5, a = 20 m/s, r = 4 mm. (a) Variation of the low frequency gain and the bandwidth; (b) variation of the step response overshoot and the transient time.
Figure 18: Relationship of the fluidic system indexes versus the cross-section radius of the circular tube, where R_h = 2 × 10^10 N·s/m^5, a = 20 m/s, R = 25 mm. (a) Variation of the low frequency gain and the bandwidth; (b) variation of the step response overshoot and the transient time.
Figure 19: Frequency response of the LCAA.
3709 KiB  
Article
An Adaptive Multi-Sensor Data Fusion Method Based on Deep Convolutional Neural Networks for Fault Diagnosis of Planetary Gearbox
by Luyang Jing, Taiyong Wang, Ming Zhao and Peng Wang
Sensors 2017, 17(2), 414; https://doi.org/10.3390/s17020414 - 21 Feb 2017
Cited by 341 | Viewed by 19327
Abstract
A fault diagnosis approach based on multi-sensor data fusion is a promising tool for dealing with complicated damage detection problems in mechanical systems. Nevertheless, this approach suffers from two challenges: (1) feature extraction from various types of sensory data and (2) selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and extensive domain expertise and human labor are required for these selections. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and adaptively optimize a combination of different fusion levels to satisfy the requirements of any fault diagnosis task. The proposed method is tested on a planetary gearbox test rig. Handcrafted features, manually selected fusion levels, single-sensor data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method is able to detect the conditions of the planetary gearbox effectively, with the best diagnosis accuracy among all comparative methods in the experiment. Full article
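The sketch below shows one way a data-level-fusion 1D convolutional network can be set up in PyTorch, with raw signals from several sensors stacked as input channels; the layer sizes and hyperparameters are illustrative assumptions, not the architecture reported in the paper.

```python
# Sketch of a data-level-fusion 1-D CNN in PyTorch (an illustration of the idea,
# not the authors' architecture): raw signals from several sensors are stacked
# as input channels and features are learned by convolution.
import torch
import torch.nn as nn

class FusionDCNN(nn.Module):
    def __init__(self, n_sensors=4, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_sensors, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, n_classes),
        )

    def forward(self, x):                      # x: (batch, n_sensors, signal_length)
        return self.classifier(self.features(x))

model = FusionDCNN()
logits = model(torch.randn(2, 4, 1024))        # two placeholder samples of 1024 points each
print(logits.shape)                            # -> torch.Size([2, 8])
```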
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1: A typical architecture of a deep convolutional neural network (DCNN).
Figure 2: Flowchart of the proposed method.
Figure 3: Flowcharts of comparative methods: (a) data-level fusion methods; (b) feature-level fusion methods; (c) decision-level fusion methods; and (d) methods with single sensory data.
Figure 4: Planetary gearbox test rig.
Figure 5: Four types of sensors and their installations.
Figure 6: Faulty planetary gears: (a) pitting tooth; (b) chaffing tooth; (c) chipped tooth; (d) root crack tooth; (e) slightly worn tooth; and (f) worn tooth.
Figure 7: Testing accuracy of eight trials of the top three methods (the proposed method, the DCNN model with feature learning and feature-level fusion, and the SVM model with handcrafted features and feature-level fusion). FL = feature learning; FE = manual feature extraction; DF = data-level fusion; FF = feature-level fusion.
Figure 8: Testing accuracy of all the comparative methods.
Figure 9: Principal component analysis (PCA) of the experimental data and learned features: (a) data-level fused input data of the proposed method; (b) features of the proposed method; (c) feature-level fused features learned through the DCNN; and (d) feature-level fused handcrafted features.
7199 KiB  
Article
Analyzing the Effects of UAV Mobility Patterns on Data Collection in Wireless Sensor Networks
by Sarmad Rashed and Mujdat Soyturk
Sensors 2017, 17(2), 413; https://doi.org/10.3390/s17020413 - 20 Feb 2017
Cited by 44 | Viewed by 6674
Abstract
Sensor nodes in a Wireless Sensor Network (WSN) can be dispersed over a remote sensing area (e.g., regions that are hardly accessible to human beings). In such networks, data collection becomes one of the major issues. Getting connected to each sensor node and retrieving the information in time introduces new challenges. Mobile sink usage, especially with Unmanned Aerial Vehicles (UAVs), is the most convenient approach to covering the area and accessing each sensor node in such a large-scale WSN. However, the operation of the UAV depends on several parameters, such as endurance time, altitude, speed, radio type in use, and the path. In this paper, we explore various UAV mobility patterns that follow different paths to sweep the operation area, seeking the best area coverage with the maximum number of covered nodes in the least amount of time needed by the mobile sink. We also introduce a new metric to formulate the tradeoff between maximizing the covered nodes and minimizing the operation time when choosing the appropriate mobility pattern. A realistic simulation environment is used to compare and evaluate the performance of the system. We present the performance results for the explored UAV mobility patterns; these results illustrate the tradeoff between maximizing the covered nodes and minimizing the operation time when choosing the appropriate mobility pattern. Full article
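A toy sketch of the kind of coverage-versus-time bookkeeping implied above: for each mobility pattern it reports the time spent per covered node and the covered nodes per unit time. The metric actually introduced in the paper may be defined differently, and the pattern names and numbers are placeholders.

```python
# Hedged sketch: comparing mobility patterns by simple coverage/time ratios.
# Pattern names and values are placeholders, not simulation results from the paper.
patterns = {
    # pattern: (covered_nodes, operation_time_s)
    "horizontal_sweep": (420, 1800),
    "spiral":           (465, 2300),
    "zigzag":           (445, 2000),
}

for name, (covered, t_op) in patterns.items():
    time_per_node = t_op / covered      # lower is better
    coverage_rate = covered / t_op      # higher is better
    print(f"{name:17s}  time/node = {time_per_node:5.2f} s   nodes/s = {coverage_rate:.3f}")
```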
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1: (a) Network model; (b) instant sweep (coverage) region of the unmanned aerial vehicle (UAV) [12].
Figure 2: Illustration of the sweep width with different UAV altitudes. Sweep width depends on the altitude of the UAV.
Figure 3: Illustration of explored mobility patterns.
Figure 4: Comparison of the number of cluster heads (CHs) and the number of covered nodes for all mobility patterns.
Figure 5: Comparison of the time spent per covered node for all mobility patterns.
Figure 6: Comparison of the utilization for all mobility patterns.
Figure 7: Comparison of time versus coverage efficiency for all mobility patterns.
5945 KiB  
Article
An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System
by Anbang Zhao, Lin Ma, Xuefei Ma and Juan Hui
Sensors 2017, 17(2), 412; https://doi.org/10.3390/s17020412 - 20 Feb 2017
Cited by 25 | Viewed by 6646
Abstract
In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity. Full article
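As a rough illustration of azimuth estimation with a single AVS, the sketch below matched-filters synthetic pressure and particle-velocity channels with a known reference waveform and takes the azimuth from the averaged acoustic-intensity components. The waveform, noise levels, and processing details are assumptions for illustration, not the authors' exact algorithm.

```python
# Illustrative sketch: matched filtering of the p, vx, vy channels followed by an
# intensity-based azimuth estimate atan2(Iy, Ix).  Signals are synthetic.
import numpy as np

fs, T = 8000, 1.0
t = np.arange(0, T, 1 / fs)
ref = np.sin(2 * np.pi * (500 * t + 200 * t**2))        # assumed LFM reference waveform

theta_true = np.deg2rad(40.0)
p = ref + 0.1 * np.random.randn(t.size)                 # pressure channel
vx = np.cos(theta_true) * ref + 0.1 * np.random.randn(t.size)
vy = np.sin(theta_true) * ref + 0.1 * np.random.randn(t.size)

mf = lambda x: np.correlate(x, ref, mode="same")        # matched filter against the reference
p_mf, vx_mf, vy_mf = mf(p), mf(vx), mf(vy)

Ix, Iy = np.mean(p_mf * vx_mf), np.mean(p_mf * vy_mf)   # averaged intensity components
print(f"estimated azimuth: {np.degrees(np.arctan2(Iy, Ix)):.1f} deg")
```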
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1: Projection of a wave vector in the Cartesian coordinate system.
Figure 2: Principle chart of the bar graph approach based on complex acoustic intensity measurements.
Figure 3: Principle chart of azimuth angle estimation based on matched filtering.
Figure 4: Detection signal waveform in the time domain and the slope of frequency modulation.
Figure 5: Channel impulse responses.
Figure 6: The processing results: (a) the result of the conventional method; (b) the result of the proposed method.
Figure 7: Comparison of DOA estimation error between the two methods: (a) signal length 0.2 s; (b) signal length 0.5 s; (c) signal length 1 s; (d) signal length 2 s.
Figure 8: The schematic diagram of the open water experimental layout.
Figure 9: (a) The AVS and electronic equipment cabin used in the experiments; (b) the AVS directivity patterns (@630 Hz).
Figure 10: The AVS sensitivity between 400 and 1500 Hz.
Figure 11: (a) The CTD; (b) the sound speed profile.
Figure 12: The azimuth angle estimation results of the acoustic source ship. (a) The result of the conventional method; (b) the result of the proposed method.
Figure 13: The azimuth angle estimation results of the target transponder ship. (a) The result of the conventional method; (b) the result of the proposed method.
8857 KiB  
Article
Dynamic Method of Neutral Axis Position Determination and Damage Identification with Distributed Long-Gauge FBG Sensors
by Yongsheng Tang and Zhongdao Ren
Sensors 2017, 17(2), 411; https://doi.org/10.3390/s17020411 - 20 Feb 2017
Cited by 19 | Viewed by 6186
Abstract
The neutral axis position (NAP) is a key parameter of a flexural member for structural design and safety evaluation. The accuracy of NAP measurement based on traditional methods does not satisfy the demands of structural performance assessment, especially under live traffic loads. In this paper, a new method to determine NAP is developed using modal macro-strain (MMS). In the proposed method, macro-strain is first measured with long-gauge Fiber Bragg Grating (FBG) sensors; then the MMS is generated from the measured macro-strain with the Fourier transform; and finally the neutral axis position coefficient (NAPC) is determined from the MMS and the neutral axis depth is calculated from the NAPC. To verify the effectiveness of the proposed method, experiments on finite element (FE) models, a steel beam, and a reinforced concrete (RC) beam were conducted. From the results, the plane-section assumption was first verified with the MMS of the first bending mode. The results then confirmed the high accuracy and stability of the method for assessing NAP. The results also proved that the NAPC is a good indicator of local damage. In summary, the proposed method facilitates accurate assessment of flexural structures. Full article
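The sketch below illustrates the MMS idea in its simplest form: the macro-strain records of two long-gauge sensors are transformed with an FFT, the amplitudes at the first bending mode are extracted, and a neutral-axis position coefficient is formed from their ratio. The signals and the exact NAPC definition are assumptions for illustration and may differ from the paper's formulation.

```python
# Hedged sketch: modal macro-strain (MMS) amplitudes at the first bending mode
# from two long-gauge sensors (top and bottom of the section), then a simple
# neutral-axis position coefficient from their ratio.  Synthetic signals only.
import numpy as np

fs, f1 = 200.0, 12.0                      # sampling rate (Hz), assumed first bending frequency (Hz)
t = np.arange(0, 20, 1 / fs)
top = -8e-6 * np.sin(2 * np.pi * f1 * t) + 1e-6 * np.random.randn(t.size)
bottom = 12e-6 * np.sin(2 * np.pi * f1 * t) + 1e-6 * np.random.randn(t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(abs(freqs - f1))            # FFT bin of the first bending mode
mms_top = abs(np.fft.rfft(top)[k])
mms_bottom = abs(np.fft.rfft(bottom)[k])

napc = mms_top / (mms_top + mms_bottom)   # one plausible coefficient definition
print(f"NAPC (neutral-axis depth ratio from the top sensor): {napc:.2f}")
```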
(This article belongs to the Special Issue Recent Advances in Fiber Bragg Grating Sensing)
Show Figures

Figure 1: Strain measurement with a strain gauge: (a) strain gauge located at the crack; and (b) strain gauge located at some distance away from the crack.
Figure 2: Area sensing with long-gauge sensors.
Figure 3: Packaged long-gauge Fiber Bragg Grating (FBG) sensor.
Figure 4: Macro-strain measurements for a beam structure.
Figure 5: Strain distribution along the cross section with sensors installed at: (a) different sides of the neutral axis; and (b) the same side of the neutral axis.
Figure 6: Finite element (FE) beam model: (a) beam; (b) cross section; and (c) damaged zone and monitored zone.
Figure 7: Damage sensitivity with different spatial sensor installations: (a) sensor installation; and (b) relative change of the neutral axis position coefficient (NAPC).
Figure 8: Typical strain results at Case I: (a) the whole time history; and (b) local part focused.
Figure 9: Performance of the proposed method under noise: (a) NAPC; and (b) neutral axis depth.
Figure 10: Performance of the traditional method under noise at Cases: (a) I; and (b) D3.
Figure 11: Experimental description for the steel beam tests (unit: mm): (a) sensor installation on the steel beam; and (b) loading position.
Figure 12: The steel beam experiment field: (a) dynamic test and data logger; and (b) damage simulation.
Figure 13: Typical macro-strain results for the steel beam test under: (a) single-point excitation; and (b) random excitation.
Figure 14: Typical results of the macro-strain-based frequency spectrum for the steel beam test.
Figure 15: Modal macro-strain (MMS) distribution along the depth of element E3 of the steel beam at Cases: (a) I; and (b) D8.
Figure 16: NAPC extraction for the steel beam from dynamic tests for: (a) E1; and (b) E3.
Figure 17: NAPC results of the steel beam: (a) absolute value; and (b) relative change.
Figure 18: Experiment setup (unit: mm): (a) reinforced concrete (RC) beam; (b) long-gauge FBG sensor distribution; and (c) loading positions.
Figure 19: Experiment field: (a) beam specimen, loader and data logger; and (b) long-gauge FBG sensors.
Figure 20: Concrete crack distribution at Cases: (a) D1; (b) D2; (c) D3; and (d) D4.
Figure 21: Typical macro-strain results under: (a) single-point excitation; and (b) random excitation.
Figure 22: Typical results of the macro-strain-based frequency spectrum.
Figure 23: MMS distribution along the depth of element E3 at Cases: (a) I; (b) D1; (c) D2; (d) D3; and (e) D4.
Figure 24: NAPC extraction from dynamic tests for: (a) E1; (b) E2; (c) E3; (d) E4; and (e) E5.
Figure 25: Results of NAPC: (a) absolute value; and (b) relative change.
Figure 26: Results of neutral axis depth: (a) absolute value; and (b) relative change.
3874 KiB  
Article
Toward Improving Electrocardiogram (ECG) Biometric Verification using Mobile Sensors: A Two-Stage Classifier Approach
by Robin Tan and Marek Perkowski
Sensors 2017, 17(2), 410; https://doi.org/10.3390/s17020410 - 20 Feb 2017
Cited by 67 | Viewed by 6890
Abstract
Electrocardiogram (ECG) signals sensed from mobile devices hold potential for biometric identity recognition applicable in remote access control systems where enhanced data security is in demand. In this study, we propose a new algorithm consisting of a two-stage classifier that combines random forest and a wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total subject verification accuracy of 99.52%, better than the 98.33% accuracy of random forest alone and the 96.31% accuracy of the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security, or remote healthcare systems. Full article
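A compact sketch of the two-stage idea described above: a random forest on fiducial features shortlists the K most probable subjects, and a wavelet-distance template match makes the final decision. Feature dimensions, the distance formula, and all data are placeholders, not the authors' implementation.

```python
# Sketch of a two-stage verifier: random forest shortlist + wavelet-distance match.
# Placeholder random data; not the paper's features, thresholds, or datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_subjects = 20
X_fid = rng.normal(size=(400, 9))              # placeholder fiducial feature vectors
y = rng.integers(0, n_subjects, size=400)      # subject labels
templates = rng.normal(size=(n_subjects, 64))  # placeholder wavelet-coefficient templates

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_fid, y)

def verify(x_fid, x_wavelet, K=3):
    """Stage 1: shortlist the K most probable subjects with the random forest.
    Stage 2: pick the shortlisted subject with the smallest wavelet distance."""
    proba = rf.predict_proba(x_fid.reshape(1, -1))[0]
    candidates = rf.classes_[np.argsort(proba)[::-1][:K]]
    dists = [np.sum(np.abs(x_wavelet - templates[c]) / (np.abs(templates[c]) + 1e-9))
             for c in candidates]
    return candidates[int(np.argmin(dists))]

probe = templates[y[0]] + 0.01 * rng.normal(size=64)   # noisy copy of subject y[0]'s template
print("true subject:", y[0], " predicted:", verify(X_fid[0], probe))
```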
(This article belongs to the Section Biosensors)
Show Figures

Figure 1: Electrocardiogram (ECG) P-QRS-T complex and fiducial characteristic points.
Figure 2: Biometric patient verification system using mobile ECG.
Figure 3: Proposed two-stage ECG biometric subject verification algorithm. The fiducial feature X_q^j(l) is first applied to the random forest classifier to identify the limited K potential candidates (usually K < 5). Next, the non-fiducial feature X_q^j(w) (wavelet coefficient) is applied to a 1-to-K template matching classifier to make the final decision.
Figure 4: ECG up-sampling using cubic spline data interpolation. Left: 128 Hz; right: 360 Hz.
Figure 5: 3D array of P-QRS-T complexes.
Figure 6: Feature extraction based on fiducial points.
Figure 7: The 5-level decomposition using the Daubechies discrete wavelet transform (DWT).
Figure 8: ECG original complex (top); and decomposed signal (bottom).
Figure 9: Random forest classifier for fiducial-based features. j represents an unknown subject; T is the total number of decision trees used in the random forest algorithm; N is the total number of subjects stored in the database; and P^ji is a probability vector of size N, indicating j as being classified as any subject i (i = 1, …, N) in the database.
Figure 10: 1-to-K template matching using the wavelet distance measure.
Figure 11: ECG biometric patient verification accuracy using the random forest classifier only. ML-3 represents the features extracted from the Q, R, and S points; ML-5 represents the features extracted from the P, Q, R, S, and T points; and ML-9 further includes the wave boundaries P_on, P_off, T_on, and T_off (according to Table 2).
Figure 12: ECG biometric patient verification accuracy using the 1-to-N classifier only. Three combinations of wavelet distance (WDIST) coefficients are presented. WDIST1-5: D1 to D5; WDIST2-5: D2 to D5; and WDIST3-5: D3 to D5.
Figure 13: Comparisons of classifiers: the machine learning (ML-5), the one-to-many template matching (WDIST2-5), and the proposed two-stage classifiers (ML-5 + WDIST2-5) on the four individual datasets and the combined large dataset.
3048 KiB  
Article
Quasi-Static Calibration Method of a High-g Accelerometer
by Yan Wang, Jinbiao Fan, Jing Zu and Peng Xu
Sensors 2017, 17(2), 409; https://doi.org/10.3390/s17020409 - 20 Feb 2017
Cited by 15 | Viewed by 5276
Abstract
To solve the problem of resonance during quasi-static calibration of high-g accelerometers, we deduce the relationship between the minimum excitation pulse width and the resonant frequency of the calibrated accelerometer according to the second-order mathematical model of the accelerometer, and improve the quasi-static calibration theory. We establish a quasi-static calibration testing system that uses a gas gun to generate high-g acceleration signals and a laser interferometer to reproduce the impact acceleration. These signals are used to drive the calibrated accelerometer. By comparing the excitation acceleration signal with the output response of the calibrated accelerometer, the impact sensitivity of the calibrated accelerometer is obtained. As indicated by the calibration test results, this calibration system produces excitation acceleration signals with a pulse width of less than 1000 μs and realizes the quasi-static calibration of high-g accelerometers with a resonant frequency above 20 kHz, with a calibration error of 3%. Full article
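As an illustration of the final calibration step, the sketch below fits a straight line of peak sensor output against peak reference acceleration (as reproduced by a laser interferometer) to obtain an impact sensitivity; the numbers are placeholders, not measured calibration data.

```python
# Minimal sketch: impact sensitivity from a least-squares straight-line fit of
# sensor output peaks versus reference acceleration peaks.  Placeholder values.
import numpy as np

peak_acc_g = np.array([5e3, 12e3, 25e3, 48e3, 80e3])      # reference acceleration peaks (g)
peak_out_mv = np.array([1.02, 2.41, 5.05, 9.71, 16.1])    # sensor output peaks (mV)

slope, intercept = np.polyfit(peak_acc_g, peak_out_mv, 1)
print(f"fitted sensitivity ~ {slope * 1e3:.3f} uV/g, intercept {intercept:.3f} mV")
```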
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1: Half-sine excitation pulse.
Figure 2: Amplitude-frequency characteristics of the accelerometer.
Figure 3: Sensor's frequency domain response to the two pulsed excitations.
Figure 4: Sensor's time domain response to the two pulsed excitations.
Figure 5: Schematic diagram of the quasi-static calibration system. (1) High-pressure gas chamber; (2) valve; (3) projectile; (4) launching tube; (5) silencer; (6) collision object (sensor mounting base); (7) calibrated accelerometer; (8) guiding cylinder; (9) liquid gas buffer device; and (10) laser interferometer.
Figure 6: Photo of the projectile before collision with the collision object.
Figure 7: Finite element model of the collision process.
Figure 8: Acceleration pulse generated at the same projectile velocity but with different masses of the impacted body (diameter of the contact surface: 50 mm).
Figure 9: Acceleration pulse waveforms generated by different contact areas.
Figure 10: Acceleration pulse curves corresponding to different pad thicknesses.
Figure 11: Doppler signal from the interferometer.
Figure 12: Excitation acceleration from the laser interferometer and response acceleration from the accelerometer.
Figure 13: Fitted sensitivity curve of the calibrated accelerometer.
7613 KiB  
Article
Formal Uncertainty and Dispersion of Single and Double Difference Models for GNSS-Based Attitude Determination
by Wen Chen, Chao Yu, Danan Dong, Miaomiao Cai, Feng Zhou, Zhiren Wang, Lei Zhang and Zhengqi Zheng
Sensors 2017, 17(2), 408; https://doi.org/10.3390/s17020408 - 20 Feb 2017
Cited by 17 | Viewed by 5308
Abstract
With multi-antenna synchronized global navigation satellite system (GNSS) receivers, the single difference (SD) between two antennas is able to eliminate both satellite and receiver clock errors, so it becomes necessary to reconsider the equivalency problem between the SD and double difference (DD) models. In this paper, we quantitatively compare the formal uncertainties and dispersions of multiple SD models and the DD model, and also carry out static and kinematic short-baseline experiments. The theoretical and experimental results show that under a non-common clock scheme the SD and DD models are equivalent. Under a common clock scheme, if we estimate stochastic uncalibrated phase delay (UPD) parameters every epoch, this SD model is still equivalent to the DD model; but if we estimate only one UPD parameter for all epochs or take it as a known constant, the SD (here called SD2) and DD models are no longer equivalent. For the vertical component of the baseline solutions, the formal uncertainties of the SD2 model are about half those of the DD model, and the dispersions of the SD2 model are reduced by even more than a factor of two. In addition, to obtain baseline solutions, the SD2 model requires a minimum of three satellites, while the DD model requires a minimum of four, which makes SD2 more advantageous for attitude determination in sheltered environments. Full article
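For context, formal uncertainties in a least-squares baseline solution are commonly taken from the parameter covariance sigma0^2 (A^T P A)^(-1); the sketch below shows this generic computation with a placeholder design matrix rather than an actual SD or DD observation model.

```python
# Generic sketch: formal uncertainty of estimated baseline components as the
# square roots of the diagonal of sigma0^2 * (A^T P A)^-1.  Placeholder matrices.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 3))          # placeholder design matrix (8 observations, E/N/U parameters)
P = np.eye(8)                        # placeholder weight matrix
sigma0 = 0.003                       # assumed unit-weight standard deviation (m)

Q = np.linalg.inv(A.T @ P @ A)       # parameter cofactor matrix
formal_unc = sigma0 * np.sqrt(np.diag(Q))
print("formal uncertainties E/N/U [m]:", formal_unc.round(4))
```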
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1: Antenna configuration in the roof test.
Figure 2: Number of visible satellites.
Figure 3: Yaw and pitch angles from the DD and SD1 models. (a) Yaw angles; (b) pitch angles.
Figure 4: Yaw and pitch angles from the DD and SD2 models. (a) Yaw angles; (b) pitch angles.
Figure 5: Yaw and pitch angles from the DD and SD models before and after multipath correction. (a) Yaw angles from the DD model; (b) pitch angles from the DD model; (c) yaw angles from the SD1 model; (d) pitch angles from the SD1 model; (e) yaw angles from the SD2 model; (f) pitch angles from the SD2 model.
Figure 6: Antenna configurations in the vehicle test.
Figure 7: Trajectory of the vehicle. The red line is the whole trajectory, while the blue line represents the road section where the satellite visibility is poor. The five bridges are annotated with numbers.
Figure 8: Number of visible satellites with time. Two spans with poor satellite visibility are marked with yellow bars.
Figure 9: Yaw and pitch angles from the DD and SD1 models. (a) Yaw angles; (b) pitch angles. The time periods in which the pitch angles have poor accuracy are marked with yellow bars, corresponding to the annotated epochs in Figure 8.
Figure 10: Yaw and pitch angles from the DD and SD2 models. (a) Yaw angles; (b) pitch angles. The time periods in which the pitch angles have poor accuracy are marked with yellow bars.
Figure 11: Pitch angles from the SD2 and FOG-SINS in the stage-2 experiment.
Figure 12: Boxplot of pitch angles from the three models.
Figure 13: Pitch angles from the SD1 model and Trimble. (a) Original; (b) stage-2. The first abnormal time period is labeled with "A".
Figure 14: SD observables during period A. (a) Original; (b) stage-2.
7251 KiB  
Article
Towards a Semantic Web of Things: A Hybrid Semantic Annotation, Extraction, and Reasoning Framework for Cyber-Physical System
by Zhenyu Wu, Yuan Xu, Yunong Yang, Chunhong Zhang, Xinning Zhu and Yang Ji
Sensors 2017, 17(2), 403; https://doi.org/10.3390/s17020403 - 20 Feb 2017
Cited by 38 | Viewed by 8939
Abstract
Web of Things (WoT) facilitates the discovery and interoperability of Internet of Things (IoT) devices in a cyber-physical system (CPS). Moreover, a uniform knowledge representation of physical resources is quite necessary for further composition, collaboration, and decision-making process in CPS. Though several efforts [...] Read more.
Web of Things (WoT) facilitates the discovery and interoperability of Internet of Things (IoT) devices in a cyber-physical system (CPS). Moreover, a uniform knowledge representation of physical resources is quite necessary for further composition, collaboration, and decision-making process in CPS. Though several efforts have integrated semantics with WoT, such as knowledge engineering methods based on semantic sensor networks (SSN), they still cannot represent the complex relationships between devices when dynamic composition and collaboration occur, and they depend entirely on the manual construction of a knowledge base, which limits scalability. In this paper, to address these limitations, we propose the Semantic Web of Things (SWoT) framework for CPS (SWoT4CPS). SWoT4CPS provides a hybrid solution that combines ontological engineering methods (extending SSN) with machine learning methods based on an entity linking (EL) model. To verify its feasibility and performance, we demonstrate the framework by implementing a temperature anomaly diagnosis and automatic control use case in a building automation system. Evaluation results on the EL method show that linking domain knowledge to DBpedia achieves relatively high accuracy and that the time complexity remains at a tolerable level. Advantages and disadvantages of SWoT4CPS, together with future work, are also discussed. Full article
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems)
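The figure captions below refer to SPARQL queries against DBpedia for candidate entity generation. As a rough, hypothetical illustration of that step (not the authors' actual query or code), a label-based candidate lookup could be issued from Python with the SPARQLWrapper package:

```python
# Hypothetical candidate-entity lookup against DBpedia, loosely mirroring the
# kind of SPARQL query referenced in Figures 4-6; not the authors' actual code.
from SPARQLWrapper import SPARQLWrapper, JSON

def candidate_entities(cell_text: str, limit: int = 10):
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        SELECT DISTINCT ?entity ?label WHERE {{
            ?entity rdfs:label ?label .
            FILTER (lang(?label) = 'en')
            FILTER (CONTAINS(LCASE(?label), LCASE("{cell_text}")))
        }} LIMIT {limit}
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [(b["entity"]["value"], b["label"]["value"])
            for b in results["results"]["bindings"]]

# Example: candidate_entities("temperature sensor")
```

In practice such substring filters are slow on the public endpoint; the point here is only the shape of the candidate-generation step that feeds the probabilistic re-ranking.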
Show Figures

Figure 1. The SWoT system overview.
Figure 2. A reference SWoT-O ontology model built by extending SSN and reusing other IoT ontologies. The default SWoT namespace has been omitted.
Figure 3. WoT metadata is semi-structured relational data with key-value pairs. The EL Annotator transfers the WoT metadata into tabular data and performs entity linking tasks. EL tasks use a probabilistic graphical model to jointly infer the linking and mapping.
Figure 4. The SPARQL query for generating candidate entities for cell text.
Figure 5. The SPARQL query used for generating candidate types.
Figure 6. The SPARQL query used for generating candidate relations.
Figure 7. Factor graph of a given relational table.
Figure 8. The SPARQL query of a semantic fuzzy search on common-sense knowledge.
Figure 9. Creating the FoI and PhysicalProcess model.
Figure 10. Creating the anomaly diagnosis model.
Figure 11. Creating the automatic control model.
Figure 12. Building up the SWoT-O ontology for a building automation system with Protégé.
Figure 13. Part of the EL results stored in the Neo4j graph database. The blue circles represent the entities and types linked to DBpedia, annotated via the "linkTo" property.
Figure 14. A snapshot of the AG table containing devices, with five columns and the corresponding row cell texts.
Figure 15. The top-N ranking lists of some of the candidate entities before and after re-ranking. The left figure is the initial ranking list before re-ranking, while the right figure shows the re-ranked top-N lists with ranking scores.
1556 KiB  
Article
Compliment Graphene Oxide Coating on Silk Fiber Surface via Electrostatic Force for Capacitive Humidity Sensor Applications
by Kook In Han, Seungdu Kim, In Gyu Lee, Jong Pil Kim, Jung-Ha Kim, Suck Won Hong, Byung Jin Cho and Wan Sik Hwang
Sensors 2017, 17(2), 407; https://doi.org/10.3390/s17020407 - 19 Feb 2017
Cited by 22 | Viewed by 8129
Abstract
Cylindrical silk fiber (SF) was coated with Graphene oxide (GO) for capacitive humidity sensor applications. Negatively charged GO in the solution was attracted to the positively charged SF surface via electrostatic force without any help from adhesive intermediates. The magnitude of the positively [...] Read more.
Cylindrical silk fiber (SF) was coated with Graphene oxide (GO) for capacitive humidity sensor applications. Negatively charged GO in the solution was attracted to the positively charged SF surface via electrostatic force without any help from adhesive intermediates. The magnitude of the positively charged SF surface was controlled through the static electricity charges created on the SF surface. The GO coating ability on the SF improved as the SF's positive charge increased. The GO-coated SFs prepared under various conditions were characterized using optical microscopy, scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDS), Raman spectroscopy, and an LCR meter. Unlike the intact SF, the GO-coated SF showed clear response-recovery behavior and well-behaved repeatability when it was exposed alternately to 20% relative humidity (RH) and 90% RH in a capacitive mode. This approach allows humidity sensors to take advantage of GO's excellent sensing properties and SF's flexibility, expediting the production of flexible, low-power-consumption devices at relatively low cost. Full article
(This article belongs to the Special Issue Humidity Sensors)
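As a reminder of why water uptake appears as a capacitance change in such a device, the idealized parallel-plate relation below is a general textbook picture, not a formula given in the paper:

```latex
% Adsorbed water raises the effective relative permittivity of the GO-coated
% silk layer, so the measured capacitance increases with relative humidity (RH):
C(\mathrm{RH}) = \varepsilon_0\,\varepsilon_r(\mathrm{RH})\,\frac{A}{d}
```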
Show Figures

Figure 1. (a) Relative tendency of electrons to move from the SF when it contacts and separates from other materials [28]; (b) Various GO-coated SFs prepared under various conditions.
Figure 2. (a) Optical images of various GO-coated SFs, and SEM images of SW and SL (scale bar 10 μm); (b) Raman mapping images of SW, SG, SA, and SL, respectively; (c) Conductance of GO-coated SFs (SG, SA, and SL).
Figure 3. (a) SF resistance depending on distance at different coating times (open symbols are experimental values whose average is marked as a solid symbol); (b) SF resistivity as a function of coating time at different coating temperatures.
Figure 4. (a) Schematic image of the capacitive humidity sensor in which the GO-coated SF was implemented. The average diameter of the SF is 0.17 cm; (b) The capacitance and its derivative curve of the sensor when the humidity changed between 20% RH and 90% RH. The dot represents the intact silk between the two Cu plates; (c) Schematic image of H2O absorption (RH 90%) and desorption (RH 20%) characteristics on the GO-coated SF.
4400 KiB  
Article
An Optical Fibre Depth (Pressure) Sensor for Remote Operated Vehicles in Underwater Applications
by Dinesh Babu Duraibabu, Sven Poeggel, Edin Omerdic, Romano Capocci, Elfed Lewis, Thomas Newe, Gabriel Leen, Daniel Toal and Gerard Dooly
Sensors 2017, 17(2), 406; https://doi.org/10.3390/s17020406 - 19 Feb 2017
Cited by 34 | Viewed by 8163
Abstract
A miniature sensor for accurate measurement of pressure (depth) with temperature compensation in the ocean environment is described. The sensor is based on an optical fibre Extrinsic Fabry-Perot interferometer (EFPI) combined with a Fibre Bragg Grating (FBG). The EFPI provides pressure measurements while [...] Read more.
A miniature sensor for accurate measurement of pressure (depth) with temperature compensation in the ocean environment is described. The sensor is based on an optical fibre Extrinsic Fabry-Perot interferometer (EFPI) combined with a Fibre Bragg Grating (FBG). The EFPI provides pressure measurements while the FBG provides temperature measurements. The sensor is mechanically robust, corrosion-resistant and suitable for use in underwater applications. The combined pressure and temperature sensor system was mounted on board a mini remotely operated underwater vehicle (ROV) in order to monitor the pressure changes at various depths. The reflected optical spectrum from the sensor was monitored online, and a pressure or temperature change caused a corresponding observable shift in the received optical spectrum. The sensor exhibited excellent stability when measured over a 2 h period underwater, and its performance was compared with that of a commercially available reference sensor also mounted on the ROV. The measurements illustrate that the EFPI/FBG sensor is more accurate for depth measurements (to within ~0.020 m). Full article
(This article belongs to the Section Physical Sensors)
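The generic relations behind such an EFPI/FBG probe are standard fibre-optic sensing formulas, not reproduced from the paper: pressure deflects the diaphragm and changes the cavity length, the FBG tracks temperature, and depth follows from the temperature-corrected hydrostatic pressure.

```latex
% Two-beam EFPI phase for an air cavity of length L interrogated at wavelength \lambda:
\phi_{\mathrm{FP}}(\lambda) = \frac{4\pi L}{\lambda}
% Bragg wavelength of the FBG used as the temperature channel:
\lambda_B = 2\,n_{\mathrm{eff}}\,\Lambda
% Depth from hydrostatic pressure (water density \rho, gravity g, surface pressure p_0):
h = \frac{p - p_0}{\rho\, g}
```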
Show Figures

Figure 1. Schematic diagram of the Extrinsic Fabry-Perot interferometer (EFPI) sensor.
Figure 2. Sensor fabrication process: (a) The capillary and the diaphragm (MM) fibre were polished; (b) The polished fibres were fused together; (c) The single-mode fibre with the Fibre Bragg Grating (FBG) was inserted into the capillary and fused with an air cavity.
Figure 3. (a) Schematic of the EFPI/FBG sensor; (b) Reflection spectrum of the sensor.
Figure 4. Optical system setup: (a) Schematic of the optical system; (b) Optical system setup for using two sensors.
Figure 5. Measurement setup: (a) Sensors mounted on the remotely operated vehicle (ROV), and (b) Sensor placement on the mount.
Figure 6. Measurement setup: (a) Sensor calibration in a water column, and (b) Calibration measurement and the stability of the sensor for 1 h.
Figure 7. (a) Sensor calibration in the pressure chamber and (b) EFPI/FBG spectrum initially and under pressure.
Figure 8. Pressure response of the EFPI/FBG sensor for 5 bar.
Figure 9. (a) Theoretical simulation with no flow, and (b) Theoretical simulation with a flow of 6 knots (3 m/s) at 10 bar pressure.
Figure 10. (a) Sensor calibration in the temperature chamber; and (b) EFPI/FBG spectrum initially and under temperature.
Figure 11. Pressure (depth) response of the sensors mounted on the ROV, where section (A) represents the ROV moving slowly to the surface of the tank, section (B) represents the ROV moving slowly at the bottom of the tank, and section (C) represents the ROV moving rapidly to the surface and to the bottom of the tank.
Figure 12. (a) Pressure (depth) response of the sensors mounted on the ROV from Figure 11, section (A); and (b) Illustration of the ROV orientation.
Figure 13. Pressure (depth) response of the sensors at the bottom of the tank when moving continuously, from Figure 11(B).
3238 KiB  
Article
A Biocompatible Colorimetric Triphenylamine-Dicyanovinyl Conjugated Fluorescent Probe for Selective and Sensitive Detection of Cyanide Ion in Aqueous Media and Living Cells
by Zi-Hua Zheng, Zhi-Ke Li, Lin-Jiang Song, Qi-Wei Wang, Qing-Fei Huang and Li Yang
Sensors 2017, 17(2), 405; https://doi.org/10.3390/s17020405 - 19 Feb 2017
Cited by 10 | Viewed by 8152
Abstract
A colorimetric and turn-on fluorescent probe 1 bearing triphenylamine-thiophene and dicyanovinyl groups has been synthesized and used to detect cyanide anion via a nucleophilic addition reaction. Probe 1 exhibited prominent selectivity and sensitivity towards CN− in aqueous media, even in the presence [...] Read more.
A colorimetric and turn-on fluorescent probe 1 bearing triphenylamine-thiophene and dicyanovinyl groups has been synthesized and used to detect cyanide anion via a nucleophilic addition reaction. Probe 1 exhibited prominent selectivity and sensitivity towards CN− in aqueous media, even in the presence of other anions such as S2−, HS−, SO32−, S2O32−, S2O82−, I−, Br−, Cl−, F−, NO2−, N3−, SO42−, SCN−, HCO3−, CO32− and AcO−. Moreover, a low detection limit (LOD, 51 nM) was observed. In addition, good cell membrane permeability and low cytotoxicity to HeLa cells were also observed, suggesting its promising potential in bio-imaging. Full article
(This article belongs to the Special Issue Colorimetric and Fluorescent Sensor)
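The 51 nM detection limit quoted above is conventionally obtained from the blank noise and the slope of the linear calibration (cf. Figure 3b below); the general definition, not a formula reproduced from the paper, is:

```latex
\mathrm{LOD} = \frac{3\sigma_{\mathrm{blank}}}{k}
% \sigma_{\mathrm{blank}}: standard deviation of the blank fluorescence signal
% k: slope of the intensity-versus-[CN^-] calibration line
```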
Show Figures

Graphical abstract
Figure 1. (a) Fluorescent responses of probe 1 (5 μM) towards various anions (50 μM) in PBS/DMSO (4/6, pH = 7.4) solution. λex = 370 nm, slits: 2.5 nm/5 nm; (b) Competing responses of probe 1 (5 μM) at 480 nm towards various analytes (50 μM) in PBS/DMSO (4/6, pH = 7.4) solution. Black bar: probe and probe + anions; red bar: probe + CN− + anions. λex = 370 nm, slits: 2.5 nm/5 nm.
Figure 2. Fluorescence images of probe 1 (5 μM) upon addition of various anions (50 μM) in PBS/DMSO (4/6, pH = 7.4) solution under visible light (a); and a 365 nm UV lamp (b).
Figure 3. (a) Fluorescence spectra of probe 1 (5 μM) in the presence of various concentrations of CN− (0–25 μM) in PBS/DMSO (4/6, pH = 7.4) solution; (b) Linear relation between the fluorescent intensity at 480 nm and the concentration of CN− in the range of 0–25 μM. λex = 370 nm, slits: 2.5 nm/5 nm.
Figure 4. 1H-NMR spectral changes of probe 1 (20 mM) upon addition of CN− (as the tetrabutylammonium salt) in CDCl3.
Figure 5. HeLa cell viability values (%) estimated by MTT assay at different concentrations of probe 1 for 24 h.
Figure 6. Images of living HeLa cells: HeLa cells incubated with probe 1 (10 μM) for 1 h and then with different concentrations of CN− ((A,E) 0 μM; (B,F) 10 μM; (C,G) 80 μM; (D,H) 160 μM) for 15 min. (A–D) Bright field; (E–H) fluorescence (λex = 385 nm, λem = 405–440 nm), blue channel (425–520 nm); scale bar: 20 μm.
Figure 7. Images of living HeLa cells: HeLa cells incubated with different concentrations of probe 1 ((A,E) 0 μM; (B,F) 2.5 μM; (C,G) 5 μM; (D,H) 10 μM) for 1 h, after which all cells were incubated with 80 µM CN− for 15 min. (A–D) Bright field; (E–H) fluorescence (λex = 385 nm, λem = 405–440 nm); scale bar: 20 μm.
Scheme 1. The sensing mechanism of probe 1.
Scheme 2. The synthesis of probe 1. Reagents and conditions: (a) NBS, DMF, r.t., 99% yield; (b) (5-formylthiophen-2-yl)boronic acid, Pd(PPh3)4, dioxane, reflux, 3.5 h, 67% yield; (c) malononitrile, piperidine, reflux, 3 h, 70% yield.
13945 KiB  
Article
Remote Sensing of Urban Microclimate Change in L’Aquila City (Italy) after Post-Earthquake Depopulation in an Open Source GIS Environment
by Valerio Baiocchi, Fabio Zottele and Donatella Dominici
Sensors 2017, 17(2), 404; https://doi.org/10.3390/s17020404 - 19 Feb 2017
Cited by 17 | Viewed by 6049
Abstract
This work reports a first attempt to use Landsat satellite imagery to identify possible urban microclimate changes in a city center after a seismic event that affected L’Aquila City (Abruzzo Region, Italy), on 6 April 2009. After the main seismic event, the collapse [...] Read more.
This work reports a first attempt to use Landsat satellite imagery to identify possible urban microclimate changes in a city center after a seismic event that affected L'Aquila City (Abruzzo Region, Italy), on 6 April 2009. After the main seismic event, the collapse of part of the buildings and the damage to most of them, with the consequent almost total depopulation of the historic city center, may have caused alterations to the microclimate. This work develops an inexpensive workflow, based on Landsat Enhanced Thematic Mapper Plus (ETM+) scenes, to reconstruct the evolution of urban land use after the catastrophic main seismic event that hit L'Aquila. We hypothesized that, before the event, the temperature was higher in the city center due to the presence of inhabitants (and thus home heating), while the opposite occurred in the surrounding areas, where new settlements of inhabitants grew over a period of a few months. We decided not to use independent meteorological data in order to avoid bias in the investigation; thus, only the smallest dataset of Landsat ETM+ scenes was considered as input data to describe the thermal evolution of the land surface after the earthquake. We managed to use the Landsat archive images to provide indications of thermal change, useful for understanding the urban changes induced by catastrophic events, setting up an easy-to-implement, robust, reproducible, and fast procedure. Full article
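The paper's exact processing chain is not reproduced here, but the standard Landsat 7 ETM+ thermal conversion it builds on (USGS handbook constants for band 6) looks roughly like the following sketch; the gain/offset values below are the nominal low-gain settings and would in practice be read from each scene's metadata.

```python
# Standard Landsat 7 ETM+ band 6 conversion from digital number (DN) to
# at-sensor brightness temperature; handbook constants, not the authors' code.
import numpy as np

K1, K2 = 666.09, 1282.71   # W/(m^2 sr um) and K, ETM+ thermal calibration constants

def brightness_temperature(dn, lmin=0.0, lmax=17.04, qcal_min=1, qcal_max=255):
    """DN -> spectral radiance -> at-sensor brightness temperature [K]."""
    dn = np.asarray(dn, dtype=float)
    radiance = (lmax - lmin) / (qcal_max - qcal_min) * (dn - qcal_min) + lmin
    return K2 / np.log(K1 / radiance + 1.0)

# Example: brightness_temperature(150) is roughly 304 K for the low-gain settings.
# Land surface temperature additionally requires an emissivity correction.
```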
Show Figures

Figure 1. The city center after the event.
Figure 2. L'Aquila, before (a) and after (b) the main seismic event. The overlapped grid is the Universal Transverse Mercator projection (UTM) WGS84/ETRF00. The scale bar (upper right) is 4 km, and the north-pointing arrow (lower left) indicates geographic north.
Figure 3. Study area of L'Aquila and its neighboring areas. Scenes copyrighted by Google Imagery, Mapdata, and Terrametrics. The image was reprojected from Spherical Mercator to WGS84; the image is around 20 km in width.
Figure 4. The complete image set of the Landsat ETM+ scenes, where the three bands (red, green, blue) acquired in the visible spectrum were blended together (Red, Green and Blue (RGB) composite) and projected to the UTM33N WGS84 datum; images are around 23 km in width.
Figure 5. The workflow of the procedure adopted for image processing.
Figure 6. RGB composites of the four image sets used for Land Surface Temperature estimation (Scan Line Corrector (SLC)-off mask applied, no cloud cover assessment); images are around 12 km in width.
Figure 7. RGB composites of the four image sets used for Land Surface Temperature estimation, with both SLC-off and cloud cover assessment masks. Images are around 12 km in width.
Figure 8. Land Surface Temperature maps (LST) without rescaling. The gray pixels correspond to the SLC-off and cloud cover assessment masks. Images are around 12 km in width.
Figure 9. Land Surface Temperature maps with rescaling applied (LSTr). The gray pixels correspond to the SLC-off and cloud cover assessment masks. Images are around 12 km in width.
5114 KiB  
Article
Gimbal Influence on the Stability of Exterior Orientation Parameters of UAV Acquired Images
by Mateo Gašparović and Luka Jurjević
Sensors 2017, 17(2), 401; https://doi.org/10.3390/s17020401 - 18 Feb 2017
Cited by 41 | Viewed by 12201
Abstract
In this paper, results from the analysis of the gimbal impact on the determination of the camera exterior orientation parameters of an Unmanned Aerial Vehicle (UAV) are presented and interpreted. Additionally, a new approach and methodology for testing the influence of gimbals on [...] Read more.
In this paper, results from the analysis of the gimbal impact on the determination of the camera exterior orientation parameters of an Unmanned Aerial Vehicle (UAV) are presented and interpreted. Additionally, a new approach and methodology for testing the influence of gimbals on the exterior orientation parameters of UAV-acquired images is presented. The main motive of this study is to examine the possibility of obtaining better geometry and more favorable spatial bundles of rays of images in UAV photogrammetric surveying. The subject is a 3-axis brushless gimbal based on a controller board (Storm32). Only two gimbal axes are taken into consideration: the roll and pitch axes. Testing was done in flight simulation mode, and in indoor and outdoor flight modes, to analyze the Inertial Measurement Unit (IMU) and photogrammetric data. Within these tests, the change of the exterior orientation parameters without the use of a gimbal is determined, as well as the potential accuracy of the stabilization with the use of a gimbal. The results show that using a gimbal has huge potential. Significantly smaller discrepancies between the data are noticed when a gimbal is used in flight simulation mode, up to four times smaller than in the other test modes. In this test, the potential accuracy of a low-budget gimbal for application in real conditions is determined. Full article
(This article belongs to the Special Issue UAV-Based Remote Sensing)
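As a hypothetical illustration of the kind of comparison described in the abstract (not the authors' code), the discrepancies between photogrammetrically derived and IMU-logged roll/pitch series can be summarized like this:

```python
# Illustrative discrepancy statistics between two angle series; names and
# example values are made up, not taken from the paper.
import numpy as np

def angle_discrepancy(photogrammetric_deg, imu_deg):
    """Return mean, standard deviation and RMS of the angle differences [deg]."""
    d = np.asarray(photogrammetric_deg) - np.asarray(imu_deg)
    d = (d + 180.0) % 360.0 - 180.0          # wrap differences into [-180, 180)
    return d.mean(), d.std(ddof=1), np.sqrt(np.mean(d**2))

# Example with made-up pitch values (degrees):
# angle_discrepancy([0.4, -0.1, 0.3], [0.1, 0.0, -0.2])
```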
Show Figures

Figure 1. A Xiaomi Yi action camera: (a) in the improving phase; (b) on a 3-axis gimbal on an Unmanned Aerial Vehicle (UAV).
Figure 2. Chessboard test field Ground Control Point (GCP) detection in Matlab.
Figure 3. Pinhole camera model.
Figure 4. Control loop of the gimbal controller.
Figure 5. (a) Used gimbal; (b) gimbal controller (Storm32) board and primary Inertial Measurement Unit (IMU) (MPU6050).
Figure 6. Indoor flight test.
Figure 7. Exterior orientation parameters acquired photogrammetrically and by the Inertial Measurement Unit (IMU) for the pitch parameter.
Figure 8. Exterior orientation parameters acquired photogrammetrically and by the Inertial Measurement Unit (IMU) for the roll parameter.
Figure 9. Roll and pitch parameter values and trend lines for the three sessions of indoor flight.
Figure 10. Roll and pitch parameter values and trend lines for both sessions of outdoor flight.
Figure 11. Display of statistical data of the indoor flight for: (a) the pitch parameter and (b) the roll parameter.
Figure 12. Display of statistical data of the outdoor flight for: (a) the pitch parameter and (b) the roll parameter.
3483 KiB  
Article
Comparative Study of Different Methods for Soot Sensing and Filter Monitoring in Diesel Exhausts
by Markus Feulner, Gunter Hagen, Kathrin Hottner, Sabrina Redel, Andreas Müller and Ralf Moos
Sensors 2017, 17(2), 400; https://doi.org/10.3390/s17020400 - 18 Feb 2017
Cited by 17 | Viewed by 9279
Abstract
Due to increasingly tighter emission limits for diesel and gasoline engines, especially concerning particulate matter emissions, particulate filters are becoming indispensable devices for exhaust gas after treatment. Thereby, for an efficient engine and filter control strategy and a cost-efficient filter design, reliable technologies [...] Read more.
Due to increasingly tighter emission limits for diesel and gasoline engines, especially concerning particulate matter emissions, particulate filters are becoming indispensable devices for exhaust gas after treatment. Thereby, for an efficient engine and filter control strategy and a cost-efficient filter design, reliable technologies to determine the soot load of the filters and to measure particulate matter concentrations in the exhaust gas during vehicle operation are highly needed. In this study, different approaches for soot sensing are compared. Measurements were conducted on a dynamometer diesel engine test bench with a diesel particulate filter (DPF). The DPF was monitored by a relatively new microwave-based approach. Simultaneously, a resistive-type soot sensor and a Pegasor soot sensing device, serving as a reference system, measured the soot concentration in the exhaust upstream of the DPF. Different engine-out soot emission rates were set by changing engine parameters. It was found that the microwave-based signal not only indicates the filter loading directly; its time derivative also allows the engine-out soot emission rate to be deduced. Furthermore, by integrating the measured particulate mass in the exhaust, the soot load of the filter can be determined. In summary, all systems agree well within certain boundaries, and the filter itself can act as a soot sensor. Full article
(This article belongs to the Collection Gas Sensors)
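As a small worked example grounded in the regressions quoted in the Figure 5 and Figure 7 captions below, the soot concentration can be recovered from either sensor derivative; the helper functions themselves are illustrative and not part of the paper.

```python
# Illustrative use of the regressions quoted in the Figure 5 and Figure 7
# captions below: soot concentration from the resistive sensor current slope,
# and from the time derivative of the microwave transmission |S21|.
import numpy as np

def c_soot_from_resistive(dI_dt_mA_per_s):
    """Figure 5 regression: c_soot [mg/m^3] from dI/dt [mA/s]."""
    return 152.76 * dI_dt_mA_per_s + 10.6

def c_soot_from_microwave(dS21_dt_dB_per_s):
    """Inverse of the Figure 7 exponential fit:
    d|S21|/dt = 0.24138*exp(-c/3315.8663) - 0.24062, solved for c."""
    return -3315.8663 * np.log((dS21_dt_dB_per_s + 0.24062) / 0.24138)

# Example: c_soot_from_resistive(0.1) is about 25.9 mg/m^3.
```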
Show Figures

Figure 1. Experimental setup for the dynamometer tests with the different measuring devices: the resistive soot sensor and the Pegasor sensing device to measure the soot concentration of the raw exhaust, the vector network analyzer (VNA) to obtain the transmission parameter |S21| in the microwave range between 0.9 and 2.1 GHz, and the differential pressure sensor (Δp) for soot load determination. Furthermore, the filter is weighed from time to time.
Figure 2. Test procedure: within the seven operation points, different soot concentrations in the exhaust were achieved by varying engine speed, accelerator pedal position, EGR rate, injection pressure (p_injection), and boost pressure (p_boost). Lower graph: resulting exhaust gas temperature and volume flow during the test.
Figure 3. Setup of the microwave-based method for filter monitoring.
Figure 4. Upper graph: soot load of the DPF, determined by weighing (open squares), with the Pegasor sensing device (solid line, corrected), and with the resistive soot sensor (dashed line, corrected). The vertical bars indicate regeneration events of the resistive soot sensor. Lower graph: microwave-derived transmission parameter |S21| in dB, and differential pressure Δp.
Figure 5. Correlation of the time-differentiated resistive soot sensor signal, dI/dt, and the soot concentration in the exhaust, c_soot, as determined by the Pegasor sensing device. The dashed regression line follows the equation: c_soot/(mg/m³) = 152.76·dI/dt/(mA/s) + 10.6.
Figure 6. Correlation of the frequency-averaged transmission parameter |S21| and the soot load of the DPF. The dashed regression line follows the equation: |S21|/dB = −3.061·m_soot/(g/l_DPF) − 14.64.
Figure 7. Time derivative of the frequency-averaged transmission parameter |S21| over the averaged soot mass concentration in the exhaust upstream of the DPF, as determined by the Pegasor sensing device. Each point was obtained from one of the operation points ① to ⑦ indicated in Figure 4. The exponential equation d|S21|/dt/(dB/s) = 0.24138·exp(−c_soot/(mg/m³)/3315.8663) − 0.24062 was used for the regression (dashed).
Figure 8. Time derivative of the differential pressure (normalized to exhaust flow) over the averaged soot mass concentration in the exhaust upstream of the DPF, as determined by the Pegasor sensing device. Each point was obtained from one of the operation points ① to ⑦ indicated in Figure 4. The exponential equation dp/dt/(mbar/s) = 5.29354 × 10⁻⁴·exp(−c_soot/(mg/m³)/24.93527) − 3.58073 × 10⁻⁴ was used for the regression (dashed).
Figure 9. Comparison of all measurement devices in determining the averaged soot mass concentration, c_soot, in the raw exhaust: measured with the Pegasor sensing device (◊), derived from the resistive soot sensor signal (∆), from the time derivative of the averaged microwave-derived transmission |S21| (□), and from the time derivative of the differential pressure ∆p (○).
362 KiB  
Article
Non-Dispersive Infrared Sensor for Online Condition Monitoring of Gearbox Oil
by Markus S. Rauscher, Anton J. Tremmel, Michael Schardt and Alexander W. Koch
Sensors 2017, 17(2), 399; https://doi.org/10.3390/s17020399 - 18 Feb 2017
Cited by 26 | Viewed by 8220
Abstract
The condition of lubricating oil used in automotive and industrial gearboxes must be controlled in order to guarantee optimum performance and prevent damage to machinery parts. In normal practice, this is done by regular oil change intervals and routine laboratory analysis, both of [...] Read more.
The condition of lubricating oil used in automotive and industrial gearboxes must be controlled in order to guarantee optimum performance and prevent damage to machinery parts. In normal practice, this is done by regular oil change intervals and routine laboratory analysis, both of which involve considerable operating costs. In this paper, we present a compact and robust optical sensor that can be installed in the lubrication circuit to provide quasi-continuous information about the condition of the oil. The measuring principle is based on non-dispersive infrared spectroscopy. The implemented sensor setup consists of an optical measurement cell, two thin-film infrared emitters, and two four-channel pyroelectric detectors equipped with optical bandpass filters. We present a method based on multivariate partial least squares regression to select appropriate optical bandpass filters for monitoring the oxidation, water content, and acid number of the oil. We perform a ray tracing simulation to analyze and correct the influence of the light path in the optical setup on the optical parameters of the bandpass filters. The measurement values acquired with the sensor for three different gearbox oil types show high correlation with laboratory reference data for the oxidation, water content, and acid number. The presented sensor can thus be a useful supplementary tool for the online condition monitoring of lubricants when integrated into a gearbox oil circuit. Full article
(This article belongs to the Section Chemical Sensors)
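The filter-selection step rests on multivariate partial least squares regression. As a generic illustration of that building block with synthetic data (not the authors' implementation), scikit-learn's PLSRegression yields the coefficient vector that plays the role of b1 in Figure 2 below:

```python
# Generic PLS regression on synthetic "spectra", illustrating the multivariate
# building block mentioned in the abstract; not the authors' code or data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 400))            # 60 oil spectra, 400 wavenumber bins
y = X[:, 120] * 0.8 + X[:, 250] * 0.5 + rng.normal(scale=0.1, size=60)  # e.g. acid number

pls = PLSRegression(n_components=3)
pls.fit(X, y)

# Large entries of the regression coefficient vector mark spectral regions that
# drive the predicted oil parameter, i.e. candidate positions for the bandpass filters.
b1 = pls.coef_.ravel()
print(b1.shape, np.argsort(np.abs(b1))[-5:])   # indices of the 5 strongest bands
```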
Show Figures

Figure 1. (a) Section of the infrared absorption spectrum of a wind turbine gearbox oil with different oxidation levels ranging from 3.29 A/cm to 7.03 A/cm according to ASTM E2412; (b) Section of the infrared absorption spectrum of an automotive gearbox oil contaminated with different concentrations of water ranging from 0 ppm to 1991 ppm, corresponding to ASTM E2412 readings of 8.67 A/cm to 32.7 A/cm.
Figure 2. Regression coefficient vector b1 constructed by the partial least squares (PLS) algorithm for the WTO and the MGO oil samples. The local maxima of b1 are highlighted, as they denote spectral regions with high influence on the acid number (AN).
Figure 3. (a) Functional principle of the sensor implementation. The light emitted by two infrared (IR) emitters travels through the measurement cell containing the oil sample being tested. It is then detected by two four-channel pyroelectric detectors equipped with different optical bandpass filters; (b) Sectional computer-aided design model of the sensor prototype.
Figure 4. Simulation result showing the angular distribution of the light passing through the bandpass filters for emitters with and without reflectors.
Figure 5. Transmission spectrum of one fresh and one used WTO sample and illustration of the optical bandpass filters selected for the non-dispersive infrared (NDIR) sensor implementation.
Figure 6. Assembled NDIR oil sensor prototype featuring 6 mm push-in hose connectors on either side and an electrical connector at the bottom of the sensor housing.
Figure 7. Functioning principle of the method used to correct for the sloping baseline of the infrared absorption spectrum.
Figure 8. (a) Model validation for the oxidation measurement of the WTO samples; (b) Model validation for the AN measurement of the WTO samples; (c) Model validation for the AN measurement of the MGO samples; (d) Model validation for the water content measurement of the AGO samples. RMSE: root mean square error.
5461 KiB  
Article
Formation and Applications of the Secondary Fiber Bragg Grating
by Bai-Ou Guan, Yang Ran, Fu-Rong Feng and Long Jin
Sensors 2017, 17(2), 398; https://doi.org/10.3390/s17020398 - 18 Feb 2017
Cited by 13 | Viewed by 5808
Abstract
Being one of the most proven fiber optic devices, the fiber Bragg grating has developed continually to extend its applications, particularly in extreme environments. Accompanying the growth of Type-IIa Bragg gratings in some active fibers, a new resonance appears at the shorter wavelength. [...] Read more.
Being one of the most proven fiber optic devices, the fiber Bragg grating has developed continually to extend its applications, particularly in extreme environments. Accompanying the growth of Type-IIa Bragg gratings in some active fibers, a new resonance appears at the shorter wavelength. This new type of grating was named "secondary Bragg grating" (SBG). This paper describes the formation and applications of SBGs. The formation of the SBG is attributed to intracore Talbot-type fringes resulting from multi-order diffraction of the inscribing beams. The SBG presents a variety of interesting characteristics, including dip merging, high-temperature resistance, a distinct temperature response, and strong higher-order harmonic reflection. These features enable promising applications in fiber lasers and fiber sensing technology. Full article
(This article belongs to the Special Issue Recent Advances in Fiber Bragg Grating Sensing)
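One simple way to picture the shorter-wavelength resonance, consistent with the Talbot-fringe explanation in the abstract but stated here only as a generic phase-matching argument, is via the Bragg condition:

```latex
% Primary grating written with period \Lambda:
\lambda_B = 2\,n_{\mathrm{eff}}\,\Lambda
% A Talbot-type fringe pattern superimposes an index modulation with a reduced
% effective period \Lambda' < \Lambda, which reflects at a shorter wavelength:
\lambda_{\mathrm{SBG}} = 2\,n_{\mathrm{eff}}\,\Lambda' < \lambda_B
```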
Show Figures

Figure 1. Experimental setup for FBG inscription. BBS: broadband source; OSA: optical spectrum analyzer.
Figure 2. Evolution of the Type-IIa grating and the secondary grating.
Figure 3. Variation of the indices throughout the grating inscription.
Figure 4. Schematic of the interleaving index-modulation structures due to the Talbot-type fringes.
Figure 5. Schematic of the DBR laser structure. Inset: photograph of the DBR laser. WDM: wavelength division multiplexer.
Figure 6. Spectra of the F-P interference and DBR laser output.
Figure 7. (a) Measured output spectra of the laser at different temperatures; and (b) lasing wavelength versus temperature.
Figure 8. Stability test result of the laser at 600 °C for 2 h.
Figure 9. Measured transmission spectra of a two-dip grating at different temperatures.
Figure 10. Pump threshold and thermal trigger (left) of the two-dip-grating-based DBR fiber laser (right).
Figure 11. Spectral evolution of the "secondary-Type-IIa" grating in the 1 μm and 1.5 μm bands.
Figure 12. Changes of the power and wavelength of the laser versus pump power. Inset: the laser spectrum at a pump power of 40 mW.
Figure 13. Laser wavelength sensitivity in response to (a) strain and (b) temperature.
Figure 14. Long-term stability of the lasing wavelength at different temperatures.
3023 KiB  
Article
UW Imaging of Seismic-Physical-Models in Air Using Fiber-Optic Fabry-Perot Interferometer
by Qiangzhou Rong, Yongxin Hao, Ruixiang Zhou, Xunli Yin, Zhihua Shao, Lei Liang and Xueguang Qiao
Sensors 2017, 17(2), 397; https://doi.org/10.3390/s17020397 - 17 Feb 2017
Cited by 20 | Viewed by 5418
Abstract
A fiber-optic Fabry-Perot interferometer (FPI) has been proposed and demonstrated for the ultrasound wave (UW) imaging of seismic-physical models. The sensor probe comprises a single mode fiber (SMF) that is inserted into a ceramic tube terminated by an ultra-thin gold film. The probe [...] Read more.
A fiber-optic Fabry-Perot interferometer (FPI) has been proposed and demonstrated for the ultrasound wave (UW) imaging of seismic-physical-models. The sensor probe comprises a single mode fiber (SMF) that is inserted into a ceramic tube terminated by an ultra-thin gold film. The probe exhibits excellent UW sensitivity thanks to the nanolayer gold film, and is thus capable of detecting weak UWs in an air medium. Furthermore, the compact sensor has a symmetrical structure, so it presents good directionality in UW detection. The spectral side-band filter technique is used for UW interrogation. After scanning the models with the sensing probe in air, two-dimensional (2D) images of four physical models are reconstructed. Full article
(This article belongs to the Section Physical Sensors)
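The interrogation idea can be sketched with the generic two-beam FPI relations below (textbook forms, not the paper's derivation): the probe wavelength sits on the quasi-linear slope of a fringe, so an ultrasound-induced cavity change maps almost linearly to detected intensity.

```latex
% Reflected two-beam interference of an air cavity of length L (fringe visibility V):
I_R(\lambda) \propto 1 + V\cos\!\left(\frac{4\pi L}{\lambda} + \varphi_0\right)
% Side-band (slope-assisted) interrogation at a fixed wavelength \lambda_0:
\delta I \approx \left.\frac{\partial I_R}{\partial L}\right|_{\lambda_0}\,\delta L
```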
Show Figures

Figure 1. (a) Schematic diagram of the UW sensor structure; (b) interference spectrum of the sensor; (c–e) sensor fabrication process; (f) image of the sensor before packaging protection.
Figure 2. Schematic configuration of the experimental setup for UW imaging; insets contain (a) a photograph of the sensor, and (b) a schematic diagram of scanning imaging for the physical models.
Figure 3. Time-domain spectra of (a) 300 kHz and (b) 1 MHz UW versus increasing distance between the PZT and the sensor.
Figure 4. Frequency-domain spectra of the UW at (a) 300 kHz and (b) 1 MHz.
Figure 5. Experimental measurements: (a,b) UW power at 300 kHz and 1 MHz versus increasing distance at a fixed emission voltage; (c,d) UW power at 300 kHz and 1 MHz versus increasing voltage at a fixed distance.
Figure 6. 300 kHz UW power versus temperature change.
Figure 7. Photographs of two physical models: (a) tilted rectangular bulk; (b) rectangular hole in a bulk.
Figure 8. Images of the physical models: (a) tilted rectangular bulk; (b) rectangular hole in a bulk.