
Sensors, Volume 15, Issue 9 (September 2015) – 183 articles, Pages 20945-24680

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Articles are published in both HTML and PDF formats; the PDF is the official version of record. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
1439 KiB  
Article
FRET-Based Nanobiosensors for Imaging Intracellular Ca2+ and H+ Microdomains
by Alsu I. Zamaleeva, Guillaume Despras, Camilla Luccardini, Mayeul Collot, Michel De Waard, Martin Oheim, Jean-Maurice Mallet and Anne Feltz
Sensors 2015, 15(9), 24662-24680; https://doi.org/10.3390/s150924662 - 23 Sep 2015
Cited by 14 | Viewed by 8496
Abstract
Semiconductor nanocrystals (NCs) or quantum dots (QDs) are luminous point emitters increasingly being used to tag and track biomolecules in biological/biomedical imaging. However, their intracellular use as highlighters of single-molecule localization and nanobiosensors reporting ion microdomain changes has remained a major challenge. Here, we report the design, generation and validation of FRET-based nanobiosensors for detection of intracellular Ca2+ and H+ transients. Our sensors combine a commercially available CANdot®565QD as an energy donor with, as an acceptor, our custom-synthesized red-emitting Ca2+ or H+ probes. These ‘Rubies’ are based on an extended rhodamine as a fluorophore and a phenol or BAPTA (1,2-bis(o-aminophenoxy)ethane-N,N,N′,N′-tetra-acetic acid) for H+ or Ca2+ sensing, respectively, and additionally bear a linker arm for conjugation. QDs were stably functionalized using the same SH/maleimide crosslink chemistry for all desired reactants. Mixing ion sensor and cell-penetrating peptides (that facilitate cytoplasmic delivery) at the desired stoichiometric ratio produced controlled multi-conjugated assemblies. Multiple acceptors on the same central donor allow up-concentrating the ion sensor on the QD surface to concentrations higher than those that could be achieved in free solution, increasing FRET efficiency and improving the signal. We validate these nanosensors for the detection of intracellular Ca2+ and pH transients using live-cell fluorescence imaging. Full article
(This article belongs to the Special Issue FRET Biosensors)
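The abstract notes that placing multiple acceptors on one QD donor raises FRET efficiency. A minimal numerical sketch of that relation, using the standard single-donor/multi-acceptor efficiency formula; the Förster radius and distance values below are illustrative assumptions, not values from the article:

```python
# Sketch of how multiple acceptors per donor raise FRET efficiency.
# r0 (Forster radius) and r (donor-acceptor distance) are assumed,
# illustrative values in nanometers, not taken from the paper.
def fret_efficiency(n_acceptors: int, r: float, r0: float) -> float:
    """FRET efficiency for n identical acceptors at distance r from one donor."""
    return n_acceptors * r0**6 / (n_acceptors * r0**6 + r**6)

# A single donor-acceptor pair at r = r0 transfers half the energy;
# crowding acceptors onto the QD surface pushes the efficiency higher.
single = fret_efficiency(1, r=6.0, r0=6.0)  # 0.5
multi = fret_efficiency(8, r=6.0, r0=6.0)   # 8/9, about 0.89
```

This is why up-concentrating the dye on the QD surface improves the signal compared with the same dye free in solution.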
Show Figures

Figure 1
<p>FRET-based red-emitting ion sensors: (<b>A</b>) Principle. Coupling a green-emitting quantum dot (QD: here CANdot<sup>®</sup>565) donor to a red-emitting rhodamine-based ion sensor, the acceptor, produces an analyte-dependent FRET signal upon donor excitation at 405 nm. The custom red-emitting ion sensors used here are Ca<sup>2+</sup> (alternatively H<sup>+</sup>) indicators, the emission of which is quenched by PET in the absence of their ligand. Analyte binding results in a strong fluorescence peaking at 602 nm; (<b>B</b>–<b>D</b>) Chemical structure of the sensors: All sensors are built on an extended rhodamine moiety (blue). The two Ca<sup>2+</sup> sensor families incorporate a BAPTA moiety (green), without (<b>B</b>) and with (<b>C</b>) an oxygen introduced on one of the aromatic rings of the BAPTA for the lower and higher affinity families: CaRuby1 (µM-mM range) and CaRuby2 (sub-µM range), respectively. Substitutions (Z<sub>1</sub>, Z<sub>2</sub> in red with Z<sub>1</sub>=Cl, Z<sub>2</sub>=H for the chloride derivatives, and Z<sub>1</sub>=H, Z<sub>2</sub>=F for the fluorine derivatives, Z<sub>1</sub>=H, Z<sub>2</sub>=Me for CaRu1-Me) yield compounds with a finely tunable K<sub>D</sub> for Ca<sup>2+</sup> binding. The pH sensor family (HRubies, (<b>D</b>)) is based on the addition of a phenol instead of a BAPTA. Note that all compounds bear an azido/alkyne side arm for click chemistry and the resulting potential for high-yield coupling reactions. The azide-bearing linker is introduced in the bridge between the two aromatic rings of the BAPTA for the CaRubies1, and on the additional oxygen for the CaRubies2. HR-PiAC bears an alkyne moiety at the ortho position of the phenol through a piperazine carbamate link; (<b>E</b>–<b>G</b>) Spectral properties of retained donor/acceptor pairs: (<b>E</b>) normalized absorbance and emission spectra (dashed and solid lines, respectively) of QD565 (green) and CaRu-Me (red). 
Since there is only a slight Ca<sup>2+</sup> sensitivity of CaRu-Me absorbance when switching from EGTA- to 2 mM Ca<sup>2+</sup>-containing solution, the K<sub>D</sub>s of CaRu-Me and QDCaRu-Me were similar, as expected. Similar properties are expected with CaRu2-F and HR-PiAC since their absorbance is insensitive to Ca<sup>2+</sup> and pH, respectively, as illustrated in panels (<b>F</b>) and (<b>G</b>), where blue and red traces are recorded in the absence and presence of the analyte, respectively.</p>
Figure 2">
Figure 2
<p>Assembly of FRET-based ion sensors. CANdots have a CdSe core surrounded by a double layer of CdZn and ZnS. The QD TOP/TOPO (Step 1) passivating layer was replaced by a hydrophilic coating peptide made by mixing 50% cysteine-(SH function) and 50% lysine-(NH<sub>2</sub> function) terminated peptides (Cys-peptide: Ac-CGSESGGSESG(FCC)<sub>3</sub>F-amide and Lys-peptide: NH<sub>2</sub>-KGSESGGSESG(FCC)<sub>3</sub>F-amide, respectively). Independently, an azide/alkyne-terminated ligand was bound to a clickable NH<sub>2</sub>-PEG to form a PEGylated dye (Step 2). Nanoparticles were then functionalized by adding the PEGylated rhodamine-based sensor (red dots) (Step 3) using an SH/NH<sub>2</sub> crosslinking reaction. Other NH<sub>2</sub>-terminated ligands can be added using the same crosslinking reaction and are included at a stoichiometric ratio (not shown).</p>
Figure 3">
Figure 3
<p>FRET between a QD565 donor and rhodamine-based ion-sensing acceptor fluorophores. FRET was measured as donor quenching upon QD excitation at 407 nm and acceptor sensitization, as a function of A/D ratio. Experiments were performed at an elevated analyte concentration (500 µM) to fully relieve the PET quenching of the dye. (<b>A</b>) Series of mixed spectra obtained by increasing the number of PEG5kDa-CaRuby2-F while keeping QD concentration constant (40 nM), in saturating Ca<sup>2+</sup>, 500 µM. The A/D ratio was determined by absorbance measurements at 407 and 581 nm, to evaluate QD565 and CaRuby concentrations, respectively; (<b>B</b>) Relative donor quenching (QD photoluminescence, red) and FRET efficiency (black) are reported after linear unmixing of donor and acceptor spectra, see Supplementary Material 6. Green data points show acceptor sensitization (F<sub>CaRuby</sub> (exc. @ 350 nm) − F<sub>QD</sub> (exc. @ 350 nm))/F<sub>CaRuby</sub> (exc. @ 535 nm); (<b>C</b>,<b>D</b>) Similar experiments were carried out with QD-HR-PiAC, with the pH adjusted to 4. In both cases, symbols ▲, ● show results from two independent runs.</p>
Figure 4">
Figure 4
<p>(<b>A</b>) Fluorometric titration of FRET-based nanobiosensor assemblies: QD-PEG5kDa-CaRu2F (prepared with A/D ratio = 7.4, and using the Invitrogen Ca<sup>2+</sup> buffer kit to adjust [Ca<sup>2+</sup>]); Spectra (<b>left</b>) were obtained when the following Ca<sup>2+</sup> concentrations were applied successively: 1, 17, 38, 65, 100, 150, 225, 351, 602 nM and 1.35 and 39 µM from bottom to top traces; (<b>right</b>) Resulting titration curves using direct excitation at 545 nm or FRET excitation upon QD excitation at 407 nm; (<b>B</b>) QD-PEG5kDa-HR-PiAC (A/D ratio = 5.6), universal pH buffer (see Supplementary 3, p. 830 in [<a href="#B20-sensors-15-24662" class="html-bibr">20</a>]). Fluorescence curves are corrected for the pH sensitivity of the QD fluorescence (see <a href="#sensors-15-24662-s001" class="html-supplementary-material">Figure S3</a> for details). Spectra shown on the left correspond to pH 6, 7.45, 7.65, 8.3, 8.9, 10.3, 11.45 and 11.9 from top to bottom, respectively. On the right: measured <span class="html-italic">K</span><sub>D</sub>s are similar when carrying out fluorometric titration either by direct excitation (at 545 nm) or by FRET excitation (at 407 nm).</p>
Figure 5">
Figure 5
<p>Live-cell imaging of intracellular ion distributions. (<b>A</b>–<b>C</b>) Confocal images of BHK-21 cells after incubation with QD-HRu-PiAC (<b>A</b>), co-stained with LysoTracker Green (<b>B</b>), and a merged image of the two channels (<b>C</b>). Scale bar is 20 µm; (<b>D</b>–<b>G</b>) Intracellular calibration of pH sensors in a suspension of BHK-21 cells using flow cytometry. Typical histograms showing the internalization of the QD565/H-Ruby pH sensors according to the H-Ruby intensity (in grey, the cells without sensors, and in red, the cells having internalized pH sensors) upon direct (<b>D</b>) and FRET (<b>F</b>) excitation. Calibration curves of the intracellular pH sensors clamped at different pH with the ionophore nigericin were obtained by direct (<b>E</b>) and FRET (<b>G</b>) excitation of H-Ruby. Intracellular pH as measured by the red histograms of H-Ruby fluorescence in (<b>D</b>,<b>F</b>) was evaluated (black dots) by extrapolation on the calibration curves (<b>E</b>,<b>G</b>), respectively; (<b>H</b>,<b>I</b>) Read-out of local Ca<sup>2+</sup> transients upon glutamate application on BHK cells stably expressing the NR<sub>1</sub> and NR<sub>2</sub>A subunits of the N-methyl-D-aspartate receptors (NMDARs). (<b>H</b>) First, internalized biosensors (QD-CaRuby-CPP at a 1:10:10 ratio) were localized by superimposition of the bright-field and time-averaged TIRF image of cultured BHK cells detected in the green channel upon 405-nm evanescent-wave excitation of the QDs. (<b>I</b>) Repetitive stimulation evokes reversible Ca<sup>2+</sup> transients. Superimposed traces obtained in the red channel following 568-nm excitation are responses of the Ca<sup>2+</sup> sensor to two successive bath applications of NMDAR agonists at a saturating concentration, with a 15-min-long continuous bath perfusion of control saline for recovery from desensitization in between. 
<a href="#sensors-15-24662-f005" class="html-fig">Figure 5</a>H,I is reprinted with permission from Zamaleeva <span class="html-italic">et al.</span>, 2014 [<a href="#B8-sensors-15-24662" class="html-bibr">8</a>]. Copyright 2015 American Chemical Society.</p>
Figure 6">
Figure 6
<p>Twisted intramolecular charge transfer in CaRuby (at the <b>left</b>) and HRuby (at the <b>right</b>) dyes.</p>
">
1230 KiB  
Article
A Location Method Using Sensor Arrays for Continuous Gas Leakage in Integrally Stiffened Plates Based on the Acoustic Characteristics of the Stiffener
by Xu Bian, Yibo Li, Hao Feng, Jiaqiang Wang, Lei Qi and Shijiu Jin
Sensors 2015, 15(9), 24644-24661; https://doi.org/10.3390/s150924644 - 23 Sep 2015
Cited by 16 | Viewed by 5882
Abstract
This paper proposes a continuous leakage location method based on an ultrasonic sensor array, which is specific to continuous gas leakage in a pressure container with an integral stiffener. This method collects the ultrasonic signals generated from the leakage hole through the piezoelectric ultrasonic sensor array, and analyzes the space-time correlation of every collected signal in the array. Meanwhile, it is combined with the method of frequency compensation and superposition in the time domain (SITD), based on the acoustic characteristics of the stiffener, to obtain a high-accuracy location result on the stiffener wall. According to the experimental results, the method successfully solves the orientation problem concerning continuous ultrasonic signals generated from leakage sources, and acquires high-accuracy location information on the leakage source by combining multiple sets of orientation results. The mean value of location absolute error is 13.51 mm on the one-square-meter plate with an integral stiffener (4 mm width; 20 mm height; 197 mm spacing), and the maximum location absolute error is generally within a ±25 mm interval. Full article
(This article belongs to the Section Physical Sensors)
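The abstract describes analyzing the space-time correlation of signals across the array. A minimal sketch of the underlying step, estimating the inter-sensor delay of a continuous signal from the peak of the cross-correlation; the paper's SITD method adds stiffener-specific frequency compensation on top of this, and the synthetic signal below is an assumption for illustration:

```python
import numpy as np

# Estimate the lag (in samples) by which sig_b trails sig_a using the
# peak of the full cross-correlation. This is only the basic space-time
# correlation idea; it does not include the paper's frequency compensation.
def estimate_delay(sig_a: np.ndarray, sig_b: np.ndarray) -> int:
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

# Simulated continuous (noise-like) leakage signal and a delayed copy.
rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)
delayed = np.roll(noise, 25)  # simulate a 25-sample propagation delay
```

With delays from several sensor pairs, the leakage direction and (by combining multiple orienting results) the source position can be triangulated.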
Show Figures

Figure 1
<p>Locating principle diagram.</p>
Figure 2
<p>The model of the sensor array.</p>
Figure 3
<p>The model of ultrasonic propagation in an integral stiffener.</p>
Figure 4
<p>Angle-power relation.</p>
Figure 5
<p>Algorithm flow chart.</p>
Figure 6
<p>Experimental apparatus.</p>
Figure 7
<p>The schematic diagram.</p>
Figure 8
<p>The signal propagation diagram (at 0.15392 ms).</p>
Figure 9
<p>The energy space distribution diagram around the stiffener.</p>
Figure 10
<p>The transmission coefficients <span class="html-italic">H</span>(<span class="html-italic">f</span>).</p>
Figure 11
<p>The test plate.</p>
Figure 12
<p>Experimental apparatus.</p>
Figure 13
<p>The 8-sensor array.</p>
Figure 14
<p>The time-frequency diagram of the leakage signal. (<b>a</b>) Time domain diagram; (<b>b</b>) Frequency domain diagram.</p>
Figure 15
<p>The orientation results (<b>a</b>) under <span class="html-italic">T<sub>1</sub></span> and using <span class="html-italic">H(f)</span>; (<b>b</b>) under <span class="html-italic">T<sub>1</sub></span> and not using <span class="html-italic">H(f)</span>; and (<b>c</b>) under <span class="html-italic">T<sub>2</sub></span> and using <span class="html-italic">H(f)</span>.</p>
Figure 16
<p>Orientation error.</p>
Figure 17
<p>Location error.</p>
11865 KiB  
Article
A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations
by Carlota Salinas, Roemi Fernández, Héctor Montes and Manuel Armada
Sensors 2015, 15(9), 24615-24643; https://doi.org/10.3390/s150924615 - 23 Sep 2015
Cited by 13 | Viewed by 6482
Abstract
Image registration for sensor fusion is a valuable technique to acquire 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for registering sensors that lack common features and that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points of 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element on the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. Full article
(This article belongs to the Section Physical Sensors)
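The abstract's core idea is a lookup table of homographies indexed by depth. A minimal sketch of how such a table could be queried, selecting the homography whose calibration depth is nearest to the measured ToF depth and mapping the pixel; the table entries below are toy values, not the article's calibrated homographies:

```python
import numpy as np

# Map a ToF pixel to RGB coordinates using a depth-dependent homography
# lookup table (Hlut). Entries are (calibration_depth_mm, 3x3 homography);
# the nearest-depth entry is selected, then the point is projected.
def map_tof_to_rgb(pt, depth, hlut):
    _, H = min(hlut, key=lambda entry: abs(entry[0] - depth))
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w  # dehomogenize

# Toy table: identity at 500 mm, a pure 10-pixel horizontal shift at 1000 mm.
shift = np.array([[1.0, 0.0, 10.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
hlut = [(500.0, np.eye(3)), (1000.0, shift)]
```

In the paper the table is built from ground control points at known working distances; interpolating between neighbouring entries would be a natural refinement of this nearest-depth sketch.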
Show Figures

Figure 1
<p>Sensory system configuration. The sensory rig consists of a ToF camera and an RGB camera, and it is mounted on a robotic platform with four degrees of freedom.</p>
Figure 2">
Figure 2
<p>Plane induced parallax.</p>
Figure 3">
Figure 3
<p>Plane projective transformation induced by two planes <math display="inline"> <semantics> <mrow> <msub> <mi>π</mi> <mn>1</mn> </msub> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi>π</mi> <mn>2</mn> </msub> </mrow> </semantics> </math> on a scene.</p>
Figure 4">
Figure 4
<p>Formation of the depth-dependent homography lookup table.</p>
Figure 5">
Figure 5
<p>Sample image pairs of the pattern grid board. (<b>a</b>) RGB and ToF amplitude images; (<b>b</b>) the depth map representation in the Cartesian system of the inner 2 × 3 grid.</p>
Figure 6">
Figure 6
<p>Geometric error evaluation. (<b>a</b>) geometric error; (<b>b</b>) distance error; (<b>c</b>) error distribution in <span class="html-italic">u</span>-axis; (<b>d</b>) error distribution in <span class="html-italic">v</span>-axis.</p>
Figure 7">
Figure 7
<p>Depth measurement evaluation. (<b>a</b>) distances from the pattern board to the cameras; (<b>b</b>) differential value of raw and filtered data; (<b>c</b>) samples mean depth <span class="html-italic">vs</span>. overall error; (<b>d</b>) samples maximum variation <span class="html-italic">vs</span>. overall error.</p>
Figure 8">
Figure 8
<p>Image sample 49-pattern board positioned at 527 mm. (<b>a</b>) Top: selected ROI on the ToF image. Bottom: zoom of the selected points on the ToF image; (<b>b</b>) Top: mapped points on the RGB image. Bottom: zoom of the estimated points on the RGB image; (<b>c</b>) Top: ROI of the ToF image. Bottom: Composition ROI from the mapped points on the RGB image; (<b>d</b>) colour depth map.</p>
Figure 9">
Figure 9
<p>Image sample 25-pattern board positioned at 527 mm. (<b>a</b>) Top: selected ROI on the ToF image. Bottom: zoom of the selected points on the ToF image; (<b>b</b>) Top: mapped points on the RGB image. Bottom: zoom of the estimated points on the RGB image; (<b>c</b>) Top: ROI of the ToF image. Bottom: Composition ROI from the mapped points on the RGB image; (<b>d</b>) colour depth map.</p>
Figure 10">
Figure 10
<p>Image sample 49. (<b>a</b>) Top: mapped points on the ToF image. Bottom: points of the ROI on the RGB image; (<b>b</b>) high resolution colour depth map; Image sample 25; (<b>c</b>) Top: mapped points on the ToF image. Bottom: points of the ROI on the RGB image; (<b>d</b>) high resolution colour depth map.</p>
Figure 10 Cont.">
Figure 11">
Figure 11
<p>Image sample 19. (<b>a</b>) selected points on the ToF; (<b>b</b>) estimated points on the RGB image; (<b>c</b>) Top: selected ToF ROI. Bottom: estimated RGB ROI; (<b>d</b>) colour depth map of the ROI.</p>
Figure 12">
Figure 12
<p>Image sample 25. (<b>a</b>) selected points on the ToF; (<b>b</b>) estimated points on the RGB image; (<b>c</b>) Top: selected ToF ROI. Bottom: estimated RGB ROI; (<b>d</b>) colour depth map of the ROI.</p>
Figure 12 Cont.">
Figure 13">
Figure 13
<p>Image sample 22. (<b>a</b>) Selected points on the ToF; (<b>b</b>) estimated points on the RGB image; (<b>c</b>) Top: selected ToF ROI. Bottom: estimated RGB ROI; (<b>d</b>) the colour depth map of the ROI.</p>
Figure 14">
Figure 14
<p>Image sample 33. (<b>a</b>) Selected points on the ToF; (<b>b</b>) estimated points on the RGB image; (<b>c</b>) Top: selected ToF ROI. Bottom: estimated RGB ROI; (<b>d</b>) the colour depth map of the ROI.</p>
Figure 15">
Figure 15
<p>Error evaluation of the experimental tests corresponding to group #1. (<b>a</b>) geometric error; (<b>b</b>) distance error; (<b>c</b>) error distribution in <span class="html-italic">u</span>-axis; (<b>d</b>) error distribution in <span class="html-italic">v</span>-axis.</p>
Figure 16">
Figure 16
<p>Error evaluation of the experimental tests corresponding to group #2. (<b>a</b>) geometric error; (<b>b</b>) distance error; (<b>c</b>) error distribution in <span class="html-italic">u</span>-axis; (<b>d</b>) error distribution in <span class="html-italic">v</span>-axis.</p>
Figure 17">
Figure 17
<p>High resolution colour depth map reconstruction. (<b>a</b>) two volumetric objects placed at different distances from the sensory system; (<b>b</b>) an object with a large relief; (<b>c</b>) two curved objects; (<b>d</b>) a continuous surface which is slanted with respect to the cameras axis.</p>
">
2020 KiB  
Article
Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization
by Guoliang Chen, Xiaolin Meng, Yunjia Wang, Yanzhe Zhang, Peng Tian and Huachao Yang
Sensors 2015, 15(9), 24595-24614; https://doi.org/10.3390/s150924595 - 23 Sep 2015
Cited by 120 | Viewed by 11752
Abstract
Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone’s acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals. Full article
(This article belongs to the Special Issue Sensors for Indoor Mapping and Navigation)
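The abstract proposes step counting from the phone's acceleration via auto-correlation analysis. A minimal sketch of that idea, under assumed parameters (100 Hz sampling, a synthetic near-periodic walking signal); the lag window and the pure-sine test signal are illustrative, not the paper's data:

```python
import numpy as np

# Walking produces a near-periodic acceleration magnitude, so the
# normalized auto-correlation peaks at the step period. The lag search
# window [min_lag, max_lag) is an assumed plausible range of step periods.
def step_period(acc: np.ndarray, min_lag: int = 20, max_lag: int = 200) -> int:
    """Return the dominant period (in samples) of the acceleration signal."""
    x = acc - acc.mean()                               # remove gravity/DC
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    ac /= ac[0]                                        # normalize to 1 at lag 0
    return min_lag + int(np.argmax(ac[min_lag:max_lag]))

fs = 100                                        # assumed 100 Hz sampling rate
t = np.arange(0, 10, 1 / fs)
acc = 1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)   # ~2 steps per second
```

Dividing the window length by the detected period gives the step count, which then drives the PDR position update fused with WiFi fixes in the UKF.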
Show Figures

Figure 1
<p>Positioning process.</p>
Figure 2
<p>Distribution of sampling and test points.</p>
Figure 3
<p>Clustering results.</p>
Figure 4
<p>Single-point positioning time.</p>
Figure 5
<p>Single-point positioning average error.</p>
Figure 6
<p>Single-point positioning maximum error.</p>
Figure 7
<p>PDR principle.</p>
Figure 8
<p>Schematic of coordinate axes.</p>
Figure 9
<p>Three-axis acceleration of a mobile phone.</p>
Figure 10
<p>Distribution of the standard deviation of the overall acceleration in stationary and walking states.</p>
Figure 11
<p>Distribution of auto-correlation during stationary and walking states.</p>
Figure 12
<p>UKF process.</p>
Figure 13
<p>UKF positioning test.</p>
Figure 14
<p>UKF positioning comparison test.</p>
Figure 15
<p>Structure of the 3D indoor positioning system.</p>
Figure 16
<p>Indoor positioning test field.</p>
Figure 17
<p>Interface of the 3D indoor positioning system.</p>
807 KiB  
Article
Location Dependence of Mass Sensitivity for Acoustic Wave Devices
by Kewei Zhang, Yuesheng Chai and Z.-Y. Cheng
Sensors 2015, 15(9), 24585-24594; https://doi.org/10.3390/s150924585 - 23 Sep 2015
Cited by 5 | Viewed by 4435
Abstract
It is shown that the mass sensitivity (Sm) of an acoustic wave (AW) device with a concentrated mass can be simply determined using its mode shape function: the Sm is proportional to the square of its mode shape. By using the Sm of an AW device with a uniform mass, which is known for almost all AW devices, the Sm of an AW device with a concentrated mass at different locations can be determined. The method is confirmed by numerical simulation for one type of AW device and by the results from two other types of AW devices. Full article
(This article belongs to the Section Physical Sensors)
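The abstract's central relation (Sm proportional to the squared mode shape, anchored to the known uniform-load sensitivity) can be sketched numerically. The sinusoidal mode shape below is an assumed example (a simply supported structure), not the specific device analyzed in the paper:

```python
import numpy as np

# Location-dependent mass sensitivity for a concentrated mass:
# S_m(x) / S_m_uniform = phi(x)^2 / <phi^2>, illustrated with an assumed
# sinusoidal mode shape phi(x) = sin(n*pi*x/L).
def sensitivity_ratio(x: float, length: float = 1.0, n: int = 1) -> float:
    """S_m(x) / S_m_uniform for a sinusoidal mode shape."""
    phi_sq = np.sin(n * np.pi * x / length) ** 2
    mean_phi_sq = 0.5  # average of sin^2 over a whole number of half-waves
    return phi_sq / mean_phi_sq

# At an antinode (x = L/2 for n = 1) the device is twice as sensitive as
# to a uniform load; at a node a concentrated mass is not sensed at all.
```

Averaging the ratio over the whole device recovers 1, i.e. the uniform-load sensitivity, which is the consistency condition behind the method.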
Show Figures

Figure 1
<p>Simulated <span class="html-italic">S<sub>m</sub></span> (solid squares) for the concentrated mass load at different locations (<span class="html-italic">x<sub>c</sub></span>) for the MSP operated at the fundamental resonant mode. The parameters of the MSP are listed in <a href="#sensors-15-24585-t001" class="html-table">Table 1</a>. The numerical simulation is done for a mass load of <span class="html-italic">M* =</span> Δ<span class="html-italic">m/M</span> = 10<sup>−5</sup>. The solid line is the fitting curve.</p>
Figure 2">
Figure 2
<p>Mass sensitivity (<span class="html-italic">S<sub>m</sub></span>) of a cantilever with a concentrated mass <span class="html-italic">versus</span> the location (<span class="html-italic">x</span>) of the mass, where <span class="html-italic">S<sub>m</sub></span> is calculated using the methodology introduced here. The <span class="html-italic">y</span>-axis is normalized as <math display="inline"> <semantics> <mrow> <msub> <mi>S</mi> <mi>m</mi> </msub> <mrow> <mo>(</mo> <mi>x</mi> <mo>)</mo> </mrow> <mo>/</mo> <msubsup> <mi>S</mi> <mi>m</mi> <mrow> <mi>u</mi> <mi>n</mi> <mi>i</mi> </mrow> </msubsup> </mrow> </semantics> </math>, where <math display="inline"> <semantics> <mrow> <msubsup> <mi>S</mi> <mi>m</mi> <mrow> <mi>u</mi> <mi>n</mi> <mi>i</mi> </mrow> </msubsup> </mrow> </semantics> </math> is defined by Equations (1) and (10).</p>
">
705 KiB  
Article
Effect of Electrode Configuration on Nitric Oxide Gas Sensor Behavior
by Ling Cui and Erica P. Murray
Sensors 2015, 15(9), 24573-24584; https://doi.org/10.3390/s150924573 - 23 Sep 2015
Cited by 7 | Viewed by 6615
Abstract
The influence of electrode configuration on the impedancemetric response of nitric oxide (NO) gas sensors was investigated for solid electrochemical cells [Au/yttria-stabilized zirconia (YSZ)/Au]. Fabrication of the sensors was carried out at 1050 °C in order to establish a porous YSZ electrolyte that enabled gas diffusion. Two electrode configurations were studied where Au wire electrodes were either embedded within or wrapped around the YSZ electrolyte. The electrical response of the sensors was collected via impedance spectroscopy under various operating conditions where gas concentrations ranged from 0 to 100 ppm NO and 1%–18% O2 at temperatures varying from 600 to 700 °C. Gas diffusion appeared to be a rate-limiting mechanism in sensors where the electrode configuration resulted in longer diffusion pathways. The temperature dependence of the NO sensors studied was independent of the electrode configuration. Analysis of the impedance data, along with equivalent circuit modeling, indicated that the electrode configuration of the sensor affected gas and ionic transport pathways, capacitance behavior, and NO sensitivity. Full article
(This article belongs to the Special Issue Gas Sensors—Designs and Applications)
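The temperature dependence mentioned in the abstract (and the activation energies of Figure 7) comes from an Arrhenius analysis. A minimal sketch of how an activation energy is typically extracted from such data; the synthetic resistance values and the 0.9 eV figure are assumptions for illustration, not the paper's results:

```python
import numpy as np

# Arrhenius fit: R = R0 * exp(Ea / (k_B * T)), so ln(R) is linear in 1/T
# and the slope gives the activation energy Ea.
K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(temps_k: np.ndarray, resistances: np.ndarray) -> float:
    """Return Ea in eV from a linear fit of ln(R) against 1/T."""
    slope, _ = np.polyfit(1.0 / temps_k, np.log(resistances), 1)
    return slope * K_B_EV

# Synthetic readings over the paper's 600-700 degC range (873-973 K),
# generated with an assumed Ea of 0.9 eV.
temps = np.array([873.0, 898.0, 923.0, 948.0, 973.0])
r = 1e3 * np.exp(0.9 / (K_B_EV * temps))
```

Fitting the EC1 and EC2 data this way yields one activation energy per configuration, which is how the Arrhenius plot in Figure 7 is summarized.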
Show Figures

Graphical abstract
Figure 1">
Figure 1
<p>Diagrams of the NO sensors with electrode configurations (<b>a</b>) EC1 and EC2; (<b>b</b>) Cross-sectional view illustrating the placement of the Au electrodes within the YSZ tape layers composing the sensor.</p>
Figure 2">
Figure 2
<p>Typical Nyquist plot showing the impedance response of EC1 and EC2 sensors at an operating temperature of 650 °C, in 10.5% O<sub>2</sub> with and without NO present. The equivalent circuit used to model the data is shown in the upper right corner of the plot. The model that fit the data is described by the solid lines.</p>
Figure 3">
Figure 3
<p>The capacitance <span class="html-italic">versus</span> NO concentration is shown for the (<b>a</b>) high and (<b>b</b>) low frequency arcs associated with the EC1 and EC2 sensors.</p>
Figure 4">
Figure 4
<p>The change in capacitance is plotted for EC1 and EC2 sensors for NO concentrations ranging from 5 to 100 ppm.</p>
Figure 5">
Figure 5
<p>The change in the phase angle is plotted for EC1 and EC2 sensors for NO concentrations ranging from 5 to 100 ppm.</p>
Figure 6">
Figure 6
<p>The oxygen partial pressure dependence of the EC1 and EC2 sensors is shown for data collected at 650 °C for oxygen concentrations ranging from 1% to 18%.</p>
Figure 7">
Figure 7
<p>The temperature dependence is described by the Arrhenius plot for the EC1 and EC2 sensors along with corresponding activation energies.</p>
">
892 KiB  
Article
Wireless Power Transfer for Autonomous Wearable Neurotransmitter Sensors
by Cuong M. Nguyen, Pavan Kumar Kota, Minh Q. Nguyen, Souvik Dubey, Smitha Rao, Jeffrey Mays and J.-C. Chiao
Sensors 2015, 15(9), 24553-24572; https://doi.org/10.3390/s150924553 - 23 Sep 2015
Cited by 20 | Viewed by 8079
Abstract
In this paper, we report a power management system for autonomous and real-time monitoring of the neurotransmitter L-glutamate (L-Glu). A low-power, low-noise, and high-gain recording module was designed to acquire signal from an implantable flexible L-Glu sensor fabricated by micro-electro-mechanical system (MEMS)-based processes. The wearable recording module was wirelessly powered through inductive coupling transmitter antennas. Lateral and angular misalignments of the receiver antennas were resolved by using a multi-transmitter antenna configuration. The effective coverage, over which the recording module functioned properly, was improved with the use of in-phase transmitter antennas. Experimental results showed that the recording system was capable of operating continuously at distances of 4 cm, 7 cm and 10 cm. The wireless power management system reduced the weight of the recording module, eliminated human intervention and enabled animal experimentation for extended durations. Full article
(This article belongs to the Special Issue Power Schemes for Biosensors and Biomedical Devices)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>(<b>a</b>) Scanning electron microscopy (SEM) photo of the probe tip indicating two working electrodes (WE), two self-reference electrodes (SRE) and a reference electrode (RE) [<a href="#B4-sensors-15-24553" class="html-bibr">4</a>]; (<b>b</b>) A photo of the assembled devices with the probe lengths of 7 and 12 mm [<a href="#B4-sensors-15-24553" class="html-bibr">4</a>].</p>
Full article ">Figure 2
<p>Photo of the neurotransmitter sensor recording device with a dimension of 25 × 27 × 6 mm<sup>3</sup>. The module included multi-stage amplifiers and a system-on-chip (SoC) processor with an integrated radio-frequency (RF) transceiver.</p>
Full article ">Figure 3
<p>Block diagram of the wireless power harvester for the wearable neurotransmitter sensor module.</p>
Full article ">Figure 4
<p>Illustration of the experiment setup with a freely-moving rat wearing the neurotransmitter sensor module inside an acrylic box with a dimension of 40 × 40 × 15 cm<sup>3</sup>.</p>
Full article ">Figure 5
<p>Radiation patterns of the normal field components <span class="html-italic">H<sub>z</sub></span> generated by the transmitter (TX) spiral antenna at different distances <span class="html-italic">z</span> of (<b>a</b>) 4, (<b>b</b>) 7, (<b>c</b>) 10 and (<b>d</b>) 11 cm.</p>
Full article ">Figure 6
<p>Measured load voltages at different distances <span class="html-italic">z</span> of (<b>a</b>) 4; (<b>b</b>) 7; (<b>c</b>) 10 and (<b>d</b>) 11 cm when a load of 745 Ω was connected to the receiver side.</p>
Full article ">Figure 7
<p>Angular misalignment θ between the TX and receiver (RX) antennas.</p>
Full article ">Figure 8
<p>Measured load voltages at different distances <span class="html-italic">z</span> = 4, 7, 10 cm when a load of 745 Ω was connected to the receiver side. (<b>a</b>) The angle θ between the TX and RX antennas was 90°. Measured load voltages at the planes of <span class="html-italic">y</span> <span class="html-italic">=</span> 0 cm and (<b>b</b>) <span class="html-italic">z</span> <span class="html-italic">=</span> 4 cm; (<b>c</b>) <span class="html-italic">z =</span> 7 cm; and (<b>d</b>) <span class="html-italic">z</span> <span class="html-italic">=</span> 10 cm; with different angular misalignment θ <span class="html-italic">=</span> 0°, 30°, 60° and 90°.</p>
Full article ">Figure 9
<p>Two identical TX antennas were arranged 20 cm apart driven by the same power supply and identical amplifier circuitry.</p>
Full article ">Figure 10
<p>Simulation of the normal component of magnetic fields at a distance <span class="html-italic">z</span> = 4 cm from the two TX antennas when the two antennas were (<b>a</b>) in-phase and (<b>b</b>) out-of-phase.</p>
Full article ">Figure 11
<p>Measured load voltages at different planes at <span class="html-italic">z</span> = 4, 7 and 10 cm when the two TX antennas were (<b>a</b>) in-phase and (<b>b</b>) out-of-phase.</p>
Full article ">Figure 12
<p>Simulation of the normal component of magnetic fields at a distance <span class="html-italic">z</span> = 4 cm from (<b>a</b>) three TX antennas; and (<b>b</b>) four TX antennas.</p>
Full article ">Figure 13
<p>(<b>a</b>) Electrical current response of the L-Glu sensor, which was wirelessly recorded with the neurotransmitter sensor module powered by the WPT system; (<b>b</b>) Calibration curve shows a sensitivity of 1.7 pA/µM with standard deviation (SD) less than 6.7 pA.</p>
Full article ">
3496 KiB  
Article
Research on the Sensing Performance of the Tuning Fork-Probe as a Micro Interaction Sensor
by Fengli Gao and Xide Li
Sensors 2015, 15(9), 24530-24552; https://doi.org/10.3390/s150924530 - 23 Sep 2015
Cited by 12 | Viewed by 5759
Abstract
The shear force position system has been widely used in scanning near-field optical microscopy (SNOM) and has recently been extended into the force sensing area. The dynamic properties of a tuning fork (TF), the core component of this system, directly determine the sensing performance of the shear positioning system. Here, we combine experimental results and finite element method (FEM) analysis to investigate the dynamic behavior of the TF probe assembled structure (TF-probe). Results from experiments under varying atmospheric pressures illustrate that the oscillation amplitude of the TF-probe is linearly related to the quality factor, and that decreasing the pressure dramatically increases the quality factor. The results from FEM analysis reveal the influences of various parameters on the resonant performance of the TF-probe. We compared numerical results of the frequency spectrum with the experimental data collected by our recently developed laser Doppler vibrometer system. Then, we investigated the parameters affecting the spatial resolution of the SNOM and the dynamic response of the TF-probe under longitudinal and transverse interactions. It is found that the TF-probe is much more sensitive to interactions in the transverse direction than in the longitudinal direction. Finally, the TF-probe was used to measure the friction coefficient of a silica–silica interface. Full article
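The linear relation between resonant oscillation amplitude and Q factor reported above (Figure 8) is what the standard driven damped-oscillator model of a TF prong predicts. A minimal sketch: the 32.768 kHz resonance is the usual watch-crystal TF value, while the drive force and effective mass are placeholder assumptions (only the proportionality to Q matters).

```python
import math

def resonant_amplitude(f0_hz, q, drive_force=1e-9, mass=1e-6):
    """Steady-state amplitude at resonance of a driven damped oscillator:
    x'' + (w0/Q) x' + w0^2 x = (F/m) cos(w t)  =>  A(w0) = (F/m) * Q / w0^2,
    i.e., the resonant amplitude grows linearly with the quality factor Q."""
    w0 = 2.0 * math.pi * f0_hz
    return (drive_force / mass) * q / w0 ** 2

a_low_q = resonant_amplitude(32768.0, 1000.0)   # e.g., TF-probe in air
a_high_q = resonant_amplitude(32768.0, 2000.0)  # e.g., at reduced pressure
```

Doubling Q doubles the resonant amplitude, consistent with the pressure-dependence trend reported in the abstract.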
(This article belongs to the Section State-of-the-Art Sensors Technologies)
Show Figures

Figure 1
<p>The geometry of a TF.</p>
Full article ">Figure 2
<p>Setup of the frequency spectrum measurement system using an LDV and an electrical feedback mode.</p>
Full article ">Figure 3
<p>A typical frequency spectrum of a TF.</p>
Full article ">Figure 4
<p>Resonance frequency and <span class="html-italic">Q</span> factor of TF.</p>
Full article ">Figure 5
<p>Atmospheric pressure <span class="html-italic">vs.</span> the resonance frequency of the TF.</p>
Full article ">Figure 6
<p>Atmospheric pressure <span class="html-italic">vs.</span> the <span class="html-italic">Q</span> factor of the TF.</p>
Full article ">Figure 7
<p>Atmospheric pressure <span class="html-italic">vs.</span> the oscillation amplitude of the TF.</p>
Full article ">Figure 8
<p>Resonant oscillation amplitude <span class="html-italic">vs. Q</span> factor of the TF.</p>
Full article ">Figure 9
<p>Resonance frequency <span class="html-italic">vs.</span> atmospheric pressure of a TF-probe.</p>
Full article ">Figure 10
<p><span class="html-italic">Q</span> factor <span class="html-italic">vs.</span> atmospheric pressure of a TF-probe.</p>
Full article ">Figure 11
<p><span class="html-italic">Q</span> factor <span class="html-italic">vs.</span> the resonant oscillation amplitude of a TF-probe.</p>
Full article ">Figure 12
<p>(<b>a</b>) The geometry; and (<b>b</b>) the fourth mode shape of the TF with FEM analysis.</p>
Full article ">Figure 13
<p>Geometry and meshing of the TF-probe with FEM analysis.</p>
Full article ">Figure 14
<p>Nine types of TF-probe structures.</p>
Full article ">Figure 15
<p>A schematic of the TF-probe under longitudinal and transverse forces.</p>
Full article ">Figure 16
<p>The effect of the transverse viscous force on the oscillation amplitude of the probe tip.</p>
Full article ">Figure 17
<p>The frictional interaction between the silica–silica surfaces with varying normal pressures.</p>
Full article ">Figure 18
<p>Resonance frequency with varying probe diameters.</p>
Full article ">Figure 19
<p>Approach curves calculated by FEM and beam theory.</p>
Full article ">
720 KiB  
Article
Validation of Inter-Subject Training for Hidden Markov Models Applied to Gait Phase Detection in Children with Cerebral Palsy
by Juri Taborri, Emilia Scalona, Eduardo Palermo, Stefano Rossi and Paolo Cappa
Sensors 2015, 15(9), 24514-24529; https://doi.org/10.3390/s150924514 - 23 Sep 2015
Cited by 61 | Viewed by 7463
Abstract
Gait-phase recognition is a necessary functionality to drive robotic rehabilitation devices for lower limbs. Hidden Markov Models (HMMs) represent a viable solution, but they need subject-specific training, making data processing very time-consuming. Here, we validated an inter-subject training procedure, as an alternative to the intra-subject one, on two-, four- and six-phase gait models in pediatric subjects. The inter-subject procedure consists of identifying a standardized parameter set that adapts the model to the measurements. We tested the inter-subject procedure on both scalar and distributed classifiers. Ten healthy children and ten hemiplegic children, each equipped with two Inertial Measurement Units placed on shank and foot, were recruited. The sagittal component of angular velocity was recorded by gyroscopes while subjects performed four walking trials on a treadmill. The goodness of the classifiers was evaluated with the Receiver Operating Characteristic. The results showed goodness values from good to optimum for all examined classifiers (0 < G < 0.6), with the best performance for the distributed classifier in two-phase recognition (G = 0.02). Differences were found among gait partitioning models, while no differences were found between training procedures, with the exception of the shank classifier. Our results raise the possibility of avoiding subject-specific training in HMMs for gait-phase recognition and of implementing them to control exoskeletons for the pediatric population. Full article
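A gait-phase HMM classifier of this kind decodes the most likely phase sequence from quantized gyroscope data, typically with the Viterbi algorithm. A minimal sketch follows, using a hypothetical two-phase (stance/swing) model whose transition and emission probabilities are invented for illustration, not taken from the paper.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden phase sequence for discrete observations (log-space)."""
    n_states, T = A.shape[0], len(obs)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = logpi + logB[:, obs[0]]
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA        # scores[i, j]: phase i -> phase j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical two-phase model (0 = stance, 1 = swing); observations are
# quantized shank angular-velocity symbols (0 = low, 1 = high).
pi = np.array([0.9, 0.1])
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],    # stance emits mostly "low"
              [0.2, 0.8]])   # swing emits mostly "high"
obs = [0, 0, 0, 1, 1, 1, 0, 0]
phases = viterbi(obs, pi, A, B)
```

With these strongly phase-specific emissions, the decoded sequence follows the observation symbols; the inter-subject procedure in the paper amounts to fixing one standardized parameter set (pi, A, B) across subjects instead of re-estimating it per subject.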
(This article belongs to the Special Issue Sensor Systems for Motion Capture and Interpretation)
Show Figures

Figure 1
<p>Sensor positioning on the subject. (<b>a</b>) Position of IMUs on participants’ lower limb; (<b>b</b>) position of foot-switches under participant’s instrumented foot (1: toe, 2: fifth metatarsophalangeal, 3: first metatarsophalangeal and 4: heel).</p>
Full article ">Figure 2
<p>Angular velocities of shank (red) and foot (blue) partitioned into two, four and six phases, relative to healthy children. FF is the flat foot, HO the heel off, HS the heel strike, IC the initial contact, LR the loading response, MS the mid stance, PS the pre swing, SP the stance phase, SW the swing and TS the terminal stance.</p>
Full article ">Figure 3
<p>Goodness (G) mean values and standard deviations (error bars) for typically developed children (TD) and children with hemiplegia (HC) in the four walking conditions (L1.0, L1.5, I1.0, I1.5) computed with scalar and distributed classifiers and with the two training procedures (SST and SPT). Statistical difference between: (i) the SST and SPT procedures are marked with <b>*</b>; (ii) two-phase model (2P) and four-phase model (4P) with a triangle; (iii) 2P and six-phase model (6P) with a square and; (iv) 4P and 6P with a circle.</p>
Full article ">
8250 KiB  
Article
Sea-Based Infrared Scene Interpretation by Background Type Classification and Coastal Region Detection for Small Target Detection
by Sungho Kim
Sensors 2015, 15(9), 24487-24513; https://doi.org/10.3390/s150924487 - 23 Sep 2015
Cited by 15 | Viewed by 10033
Abstract
Sea-based infrared search and track (IRST) is important for homeland security, detecting missiles and asymmetric boats. This paper proposes a novel scheme to interpret various infrared scenes by classifying the infrared background types and detecting the coastal regions in omni-directional images. A background-type- or region-selective small infrared target detector should be deployed to maximize the detection rate and to minimize the number of false alarms. A spatial filter-based small target detector is suitable for identifying stationary incoming targets in remote sea areas with sky only. Many false detections can occur in an image sector containing a coastal region, due to ground clutter and the difficulty of finding true targets with the same spatial filter-based detector. A temporal filter-based detector was used to handle these problems. Therefore, the scene type and coastal region information is critical to the success of IRST in real-world applications. In this paper, the infrared scene type was determined using the relationships between the sensor line-of-sight (LOS) and the horizon line in an image. The proposed coastal region detector can be activated if the background type of the probing sector is determined to be a coastal region. Coastal regions can be detected by fusing the region map and curve map. The experimental results on real infrared images highlight the feasibility of the proposed sea-based scene interpretation. In addition, the effects of the proposed scheme were analyzed further by applying region-adaptive small target detection. Full article
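The coastal-region detection described above fuses a clustering-based region map with an edge/curve map (Figures 14-18). A simplified numpy-only sketch on a synthetic sky/sea image is shown below; the gradient-magnitude edge map stands in for the paper's Canny and edge-linking steps, and all sizes and thresholds are illustrative assumptions.

```python
import numpy as np

def kmeans_regions(img, k=2, iters=20, seed=0):
    """Cluster pixel intensities with K-means to obtain a coarse region map."""
    rng = np.random.default_rng(seed)
    flat = img.ravel().astype(float)
    centers = rng.choice(np.unique(flat), size=k, replace=False)
    for _ in range(iters):
        labels = np.abs(flat[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = flat[labels == j].mean()
    return labels.reshape(img.shape)

def edge_map(img, thresh=10.0):
    """Gradient-magnitude edge map (a stand-in for Canny + edge linking)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

# Synthetic sector: bright "coast" rows above a dark "sea".
img = np.zeros((20, 20))
img[:10, :] = 100.0
regions = kmeans_regions(img, k=2)
edges = edge_map(img)
# Fused map (the AND step): region boundaries that coincide with strong edges.
region_edges = edge_map(regions.astype(float), thresh=0.4)
fused = region_edges & edges
```

The fused map keeps only boundary pixels supported by both cues, which is the spirit of the AND operation in Figure 18.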
(This article belongs to the Special Issue Infrared and THz Sensing and Imaging)
Show Figures

Figure 1
<p>Motivation for the background classification and coastal region detection for sea-based infrared search and track: different types of small infrared target detection methods should be applied depending on the background to maximize the detection rate and to minimize the false alarm rate.</p>
Full article ">Figure 2
<p>Comparison of spatial filter-based infrared target detection for different types of background: (<b>a</b>) sky-sea; (<b>b</b>) coast region. The yellow squares represent the detected target regions.</p>
Full article ">Figure 3
<p>Proposed infrared scene interpretation system.</p>
Full article ">Figure 4
<p>Proposed infrared scene interpretation system.</p>
Full article ">Figure 5
<p>Example of the prepared test DB for a quantitative inspection.</p>
Full article ">Figure 6
<p>Distribution of the thermal average intensity and standard deviation for each region.</p>
Full article ">Figure 7
<p>Distributions of the gray level co-occurrence matrix (GLCM)-based thermal texture for each region: (<b>a</b>) contrast <span class="html-italic">vs.</span> correlation; (<b>b</b>) contrast <span class="html-italic">vs.</span> homogeneity; (<b>c</b>) contrast <span class="html-italic">vs.</span> entropy; (<b>d</b>) correlation <span class="html-italic">vs.</span> homogeneity; (<b>e</b>) correlation <span class="html-italic">vs.</span> entropy; (<b>f</b>) homogeneity <span class="html-italic">vs.</span> entropy.</p>
Full article ">Figure 8
<p>Horizon observation of the sea-based infrared images and background types: (<b>a</b>) normal sky-sea; (<b>b</b>) remote coast; (<b>c</b>) near coast background.</p>
Full article ">Figure 9
<p>Proposed infrared scene classification method using the horizon and clutter density cues.</p>
Full article ">Figure 10
<p>Estimation of the horizon in image (<math display="inline"> <msub> <mi>H</mi> <mrow> <mi>i</mi> <mi>m</mi> <mi>g</mi> </mrow> </msub> </math>) using the projection method in the near coast region: (<b>a</b>) edge map-based measurement; (<b>b</b>) line map-based measurement.</p>
Full article ">Figure 11
<p>Examples of scene classification using the proposed background type classification.</p>
Full article ">Figure 12
<p>Limitations of the previous region segmentation approaches: (<b>a</b>) normalized graph cut (N-cut) with three prior regions; (<b>b</b>) mean-shift segmentation; (<b>c</b>) prior rectangle for snake; (<b>d</b>) after energy minimization in the snake algorithm.</p>
Full article ">Figure 13
<p>Proposed coast region detection flow given in the scene type information.</p>
Full article ">Figure 14
<p>Region map extraction using K-means clustering and small region removal.</p>
Full article ">Figure 15
<p>Results of the region map extraction: (<b>a</b>) test image; (<b>b</b>) original K-means clustering; (<b>c</b>) final region map.</p>
Full article ">Figure 16
<p>Curve map extraction using Canny edge detection, edge linking and short curve removal.</p>
Full article ">Figure 17
<p>Results of curve map extraction: (<b>a</b>) test image; (<b>b</b>) initial raw edge map by Canny edge detector; (<b>c</b>) contour extraction with a gap size of 2; (<b>d</b>) final curve map by removing the short curves.</p>
Full article ">Figure 18
<p>Fused map and coast boundary map generation flow using the selected region map and curve map: (<b>a</b>) selected region map using horizon information; (<b>b</b>) extracted curve map; (<b>c</b>) fused map generation by applying an AND operation to the selected region and curve map; (<b>d</b>) coast boundary representation.</p>
Full article ">Figure 19
<p>Region map selection flow using the geometric horizon and clutter density.</p>
Full article ">Figure 20
<p>Composition of the infrared database.</p>
Full article ">Figure 21
<p>Examples of successful infrared scene type classification.</p>
Full article ">Figure 22
<p>The effects of the parameter K in the K-means segmentation: (<b>a</b>) test image; (<b>b</b>) K = 3; (<b>c</b>) K = 5; (<b>d</b>) K = 7; (<b>e</b>) K = 9; and (<b>f</b>) K = 12.</p>
Full article ">Figure 23
<p>Comparison of the coastal region detection results for Test Set 1: (<b>a</b>) proposed coastal region detection; (<b>b</b>) mean-shift segmentation-based method [<a href="#B28-sensors-15-24487" class="html-bibr">28</a>]: Scene 1 was successful; Scenes 2 and 3 failed (segmentation results were displayed as partial outputs.) [<a href="#B28-sensors-15-24487" class="html-bibr">28</a>]; (<b>c</b>) statistical region merging method [<a href="#B35-sensors-15-24487" class="html-bibr">35</a>].</p>
Full article ">Figure 24
<p>Comparison of the coastal region detection results for Test Set 2: (<b>a</b>) proposed coastal region detection; (<b>b</b>) mean-shift segmentation [<a href="#B28-sensors-15-24487" class="html-bibr">28</a>]: it failed for these test scenes; (<b>c</b>) statistical region merging method [<a href="#B35-sensors-15-24487" class="html-bibr">35</a>].</p>
Full article ">Figure 25
<p>Generation of synthetic test images by inserting a 3D CAD model into the coast background image [<a href="#B36-sensors-15-24487" class="html-bibr">36</a>]: (<b>a</b>) 3D CAD model of a missile; (<b>b</b>) generated infrared image with the target motion.</p>
Full article ">Figure 26
<p>Effect of the coast region information in the infrared small target detection problem for the synthetic DB: (<b>a</b>) application of the temporal filter-based detector (TCF) for the identified coast region; (<b>b</b>) application of a spatial filter-based detector (top-hat). The yellow circles denote the ground truths, and the red rectangles represent the detection results.</p>
Full article ">Figure 27
<p>Final footprint of target detection using a temporal filter (TCF) in the identified coastal region. The blue dots represent the detected target locations.</p>
Full article ">Figure 28
<p>Effect of the coast region information in the infrared small target detection problem for the real WIG craft DB: (<b>a</b>) application of the temporal filter-based detector (TCF) for the identified coast region; (<b>b</b>) application of spatial filter-based detector (top-hat). The yellow circles denote the ground truths, and the red rectangles represent the detection results.</p>
Full article ">Figure 29
<p>Examples of a failure case: (<b>a</b>) dense cloud clutter around horizon and sky; (<b>b</b>) extracted region map; (<b>c</b>) extracted edge map; (<b>d</b>) coast detection using the proposed method.</p>
Full article ">
1804 KiB  
Article
Techniques for Updating Pedestrian Network Data Including Facilities and Obstructions Information for Transportation of Vulnerable People
by Seula Park, Yoonsik Bang and Kiyun Yu
Sensors 2015, 15(9), 24466-24486; https://doi.org/10.3390/s150924466 - 23 Sep 2015
Cited by 8 | Viewed by 4928
Abstract
Demand for a Pedestrian Navigation Service (PNS) is on the rise. To provide a PNS for the transportation of vulnerable people, more detailed information on pedestrian facilities and obstructions should be included in the Pedestrian Network Data (PND) used for the PNS. Such data can be constructed efficiently by collecting GPS trajectories and integrating them with the existing PND. However, these two kinds of data have geometric differences and topological inconsistencies that need to be addressed. In this paper, we provide a methodology for integrating pedestrian facilities and obstructions information with an existing PND. First, we extracted the significant points from user-collected GPS trajectories by identifying the geometric difference index and attributes of each point. The extracted points were then used to form an initial solution for the matching between the trajectory and the PND. Two geometrical algorithms were proposed and applied to reduce two kinds of errors in the matching: on dual lines and at intersections. Using the final solution for the matching, we reconstructed the node/link structure of the PND, including the facilities and obstructions information. Finally, performance was assessed at a test site, and 79.2% of the collected data were correctly integrated with the PND. Full article
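The initial matching between GPS trajectory segments and PND segments rests on a distance condition D and a direction condition A (Figure 3). A minimal planar sketch follows; the distance and angle thresholds are chosen arbitrarily for illustration, not taken from the paper.

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to the line segment ab (planar coords)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def heading_diff_deg(seg1, seg2):
    """Smallest absolute angle between the directions of two segments."""
    a1 = math.atan2(seg1[1][1] - seg1[0][1], seg1[1][0] - seg1[0][0])
    a2 = math.atan2(seg2[1][1] - seg2[0][1], seg2[1][0] - seg2[0][0])
    d = abs(math.degrees(a1 - a2)) % 360.0
    return min(d, 360.0 - d)

def matches(gps_seg, pnd_seg, max_dist=10.0, max_angle=30.0):
    """Distance (D) and direction (A) conditions for the initial matching;
    segment direction is treated as meaningful (walking sense preserved)."""
    d = min(point_segment_distance(gps_seg[0], *pnd_seg),
            point_segment_distance(gps_seg[1], *pnd_seg))
    return d <= max_dist and heading_diff_deg(gps_seg, pnd_seg) <= max_angle
```

A GPS segment is assigned to a PND segment only when it is both close to it and roughly parallel to it, which is the pair of conditions evaluated in Figure 3 before the dual-line and intersection corrections are applied.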
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>Flowchart of the process to construct the pedestrian network data (PND) for transport of vulnerable people.</p>
Full article ">Figure 2
<p>Splitting polyline objects of PND links into the unit of segments.</p>
Full article ">Figure 3
<p>Evaluation of the distance (<span class="html-italic">D</span>) and the direction (<span class="html-italic">A</span>) conditions between GPS trajectory and PND.</p>
Full article ">Figure 4
<p>Error cases in initial solution (thin lines refer to existing PND, and thick and dotted lines refer to the initial solution of the significant points): (<b>a</b>) Example of errors on dual-lined segments; (<b>b</b>) Examples of errors near the intersections.</p>
Full article ">Figure 5
<p>Process for correcting errors on dual-lined segments: (<b>a</b>) Initial solution after the initial matching; (<b>b</b>) Segments not overlapped with PND are deleted; (<b>c</b>) Endpoints of the initial solutions are moved to start points of the next segments; (<b>d</b>) Repeat (<b>a</b>–<b>c</b>) until there is no more segment deleted.</p>
Full article ">Figure 6
<p>Process for correcting errors near the intersections: (<b>a</b>) Extract segments if their start or endpoints are near the intersections; (<b>b</b>) Extend the segment to the intersection; (<b>c</b>) Delete segments completely inside the buffer; (<b>d</b>) All segments are now extended to the intersection.</p>
Full article ">Figure 7
<p>Process for splitting PND link objects at the start or endpoints of the information sections.</p>
Full article ">Figure 8
<p>Process for extracting PND links of facilities or obstructions and transferring attributes from the significant points to the PND links.</p>
Full article ">Figure 9
<p>Results of field survey to collect GPS trajectories.</p>
Full article ">Figure 10
<p>The existing PND set of the test site.</p>
Full article ">Figure 11
<p>Error cases in initial solution (thin lines refer to existing PND, and thick lines refer to the initial solution of the significant points): (<b>a</b>) Example of errors on dual-lined segments; (<b>b</b>) Examples of errors near the intersections.</p>
Full article ">Figure 12
<p>Modified solution after resolving errors (thin lines refer to existing PND, and thick lines refer to the initial solution of the significant points): (<b>a</b>) Example of errors on dual-lined segments; (<b>b</b>) Examples of errors near the intersections.</p>
Full article ">Figure 13
<p>New PND of the test site integrated with the facilities and obstructions obtained by GPS trajectory of users.</p>
Full article ">Figure 14
<p>Examples of omission occurred because: (<b>a</b>) information was recorded where the PND did not have any link to be matched; (<b>b</b>) the boundary of the PND was not exactly coincident with that of the GPS data.</p>
Full article ">
214 KiB  
Conference Report
4th International Symposium on Sensor Science (I3S2015): Conference Report
by Peter Seitz, Debbie G. Senesky, Michael J. Schöning, Peter C. Hauser, Roland Moser, Hans Peter Herzig, Assefa M. Melesse, Patricia A. Broderick and Patrick Thomas Eugster
Sensors 2015, 15(9), 24458-24465; https://doi.org/10.3390/s150924458 - 23 Sep 2015
Viewed by 5986
Abstract
An international scientific conference was sponsored by the journal Sensors under the patronage of the University of Basel. The 4th edition of the International Symposium on Sensor Science (I3S2015) ran from 13 to 15 July 2015 in Basel, Switzerland. It comprised five plenary sessions and one morning with three parallel sessions. The conference covered the most exciting aspects and the latest developments in sensor science. The conference dinner took place on the second evening of the conference. The I3S2015 brought together 170 participants from 40 different countries. [...] Full article
(This article belongs to the Special Issue I3S 2015 Selected Papers)
1253 KiB  
Article
Using LDR as Sensing Element for an External Fuzzy Controller Applied in Photovoltaic Pumping Systems with Variable-Speed Drives
by Geraldo Neves De A. Maranhão, Alaan Ubaiara Brito, Anderson Marques Leal, Jéssica Kelly Silva Fonseca and Wilson Negrão Macêdo
Sensors 2015, 15(9), 24445-24457; https://doi.org/10.3390/s150924445 - 22 Sep 2015
Cited by 8 | Viewed by 7843
Abstract
In the present paper, a fuzzy controller applied to a Variable-Speed Drive (VSD) for use in Photovoltaic Pumping Systems (PVPS) is proposed. The fuzzy logic system (FLS) used is embedded in a microcontroller and corresponds to a proportional-derivative controller. A Light-Dependent Resistor (LDR) is used to measure, approximately, the irradiance incident on the PV array. Experimental tests are executed using an Arduino board. The experimental results show that the fuzzy controller is capable of operating the system continuously throughout the day and controlling the direct current (DC) voltage level in the VSD with a good performance. Full article
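A proportional-derivative fuzzy controller of the kind described takes the normalized LDR-derived error and its variation as inputs and outputs a correction to the VSD frequency command. A minimal zero-order Sugeno-style sketch is given below; the triangular membership functions, rule base and output singletons are invented for illustration and are not the authors' design.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pd(e, de):
    """Proportional-derivative fuzzy step: e and de are assumed normalized
    to [-1, 1]; output is a normalized frequency-command correction."""
    neg = lambda x: tri(x, -2.0, -1.0, 0.0)
    zero = lambda x: tri(x, -1.0, 0.0, 1.0)
    pos = lambda x: tri(x, 0.0, 1.0, 2.0)
    # (rule weight, output singleton): decrease (<0), hold (0), increase (>0)
    rules = [
        (min(neg(e), neg(de)), -1.0), (min(neg(e), zero(de)), -1.0),
        (min(neg(e), pos(de)), 0.0),  (min(zero(e), neg(de)), -0.5),
        (min(zero(e), zero(de)), 0.0), (min(zero(e), pos(de)), 0.5),
        (min(pos(e), neg(de)), 0.0),  (min(pos(e), zero(de)), 1.0),
        (min(pos(e), pos(de)), 1.0),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

The weighted-average defuzzification keeps the rule evaluation cheap enough to run on a small microcontroller such as the Arduino board used in the paper.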
(This article belongs to the Section Physical Sensors)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>(<b>a</b>) Photovoltaic Pumping Systems (PVPS) Block Diagram with External Control; (<b>b</b>) Circuit using Light-dependent Resistor (LDR) connected to the fuzzy controller.</p>
Full article ">Figure 2
<p>LDR series circuit response and irradiance during the day time, (<b>a</b>) sunny day and (<b>b</b>) cloudy day.</p>
Full article ">Figure 3
<p>LDR normalized response, (<b>a</b>) sunny day and (<b>b</b>) cloudy day.</p>
Full article ">Figure 4
<p>(<b>a</b>) The spectral response of amorphous silicon and the LDR (<b>b</b>) solar radiation spectrum.</p>
Full article ">Figure 5
<p>Fuzzy sets input.</p>
Full article ">Figure 6
<p>Input signal fuzzy controller (<b>a</b>) LDR series circuit response and (<b>b</b>) Variation of the LDR series circuit response.</p>
Full article ">Figure 7
<p>Detailed behavior of the parameter G.</p>
Full article ">Figure 8
<p>Fuzzy controller performance regarding the DC bus voltage behavior.</p>
Full article ">Figure 9
<p>Variation in DC voltage in a PVPS with VSD containing an embedded PID.</p>
Full article ">
827 KiB  
Article
Optimizing the De-Noise Neural Network Model for GPS Time-Series Monitoring of Structures
by Mosbeh R. Kaloop and Jong Wan Hu
Sensors 2015, 15(9), 24428-24444; https://doi.org/10.3390/s150924428 - 22 Sep 2015
Cited by 13 | Viewed by 6756
Abstract
The Global Positioning System (GPS) has recently been used widely in structural monitoring and other applications. Nevertheless, GPS accuracy still suffers from the errors afflicting the measurements, particularly for the short-period displacement of structural components. Previously, the multi-filter method was utilized to remove the displacement errors. This paper aims at a novel application of neural network prediction models to improve GPS monitoring time-series data. Four prediction models are applied as neural network solutions, to estimate which model can be recommended: back-propagation, cascade-forward back-propagation, adaptive filter and extended Kalman filter. A noise simulation and the short-period GPS displacement component of a bridge, monitored at a 1 Hz sampling frequency, are used to validate the four models and the previous method. The results show that the adaptive-filter neural network is the suggested choice for de-noising the observations, specifically for the GPS displacement components of structures. Also, this model is expected to have a significant influence on the design of structures with respect to the low-frequency responses and measurement contents. Full article
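An adaptive-filter neural network for de-noising is, at its core, an ADALINE-style linear predictor trained online with the LMS rule. The sketch below applies one to a synthetic displacement-like series; the tap count, step size and noise level are assumed values, not the paper's configuration.

```python
import math
import random

def lms_denoise(signal, taps=8, mu=0.01):
    """One-step LMS (ADALINE-style) linear predictor used as a de-noising
    filter: each sample is predicted from the previous `taps` samples, so the
    output tracks the correlated (structural) motion and rejects white noise."""
    w = [0.0] * taps
    out = list(signal[:taps])          # warm-up samples passed through
    for n in range(taps, len(signal)):
        x = signal[n - taps:n]
        y = sum(wi * xi for wi, xi in zip(w, x))              # prediction
        e = signal[n] - y                                     # prediction error
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, x)]  # LMS update
        out.append(y)
    return out

# Synthetic record: slow structural oscillation plus white measurement noise.
random.seed(1)
clean = [math.sin(2.0 * math.pi * 0.5 * 0.02 * k) for k in range(2000)]
noisy = [c + random.gauss(0.0, 0.3) for c in clean]
smoothed = lms_denoise(noisy)
```

Because the white noise is unpredictable from past samples, the predictor output retains the oscillatory displacement while attenuating the noise, which is the behavior exploited by the adaptive-filter model in the paper.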
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1
<p>Design of the neural network (NN) stochastic model identification system.</p>
Full article ">Figure 2
<p>Back-propagation neural networks architecture.</p>
Full article ">Figure 3
<p>Cascade-forward back-propagation neural networks architecture.</p>
Full article ">Figure 4
<p>Adaptive filter neural networks architecture.</p>
Full article ">Figure 5
<p>Comparison of the simulation results (<b>a</b>) Back-Propagation Neural Network (BPN); (<b>b</b>) Cascade-Forward Back-Propagation Neural Network (CFN); (<b>c</b>) Adaptive Filter Neural Network (ADFN); (<b>d</b>) Extended Kalman Filter Neural Network (EKFN).</p>
Full article ">Figure 6
<p>Global Positioning System (GPS) network diagram.</p>
Full article ">Figure 7
<p>General view of the bridge and selection of the monitoring study point.</p>
Full article ">Figure 8
<p>Dynamic performance for z direction. (<b>a</b>) Long period displacement; (<b>b</b>) Short period displacement; (<b>c</b>) Multi filter method (MFM) de-noise frequency contents; (<b>d</b>) ADFN de-noise frequency contents.</p>
Full article ">Figure 9
<p>Long-period (left) and short-period (right) displacement for the (<b>a</b>) x and (<b>b</b>) y directions.</p>
Full article ">Figure 10
<p>Frequencies of short period before and after de-noise GPS monitoring point (<b>a</b>) x, (<b>b</b>) y.</p>
Full article ">
1293 KiB  
Article
Toward Epileptic Brain Region Detection Based on Magnetic Nanoparticle Patterning
by Maysam Z. Pedram, Amir Shamloo, Aria Alasty and Ebrahim Ghafar-Zadeh
Sensors 2015, 15(9), 24409-24427; https://doi.org/10.3390/s150924409 - 22 Sep 2015
Cited by 18 | Viewed by 6062
Abstract
Resection of the epilepsy foci is the best treatment for more than 15% of epileptic patients or 50% of patients who are refractory to all forms of medical treatment. Accurate mapping of the locations of epileptic neuronal networks can result in the complete [...] Read more.
Resection of the epileptic foci is the best treatment for more than 15% of epileptic patients, or 50% of the patients who are refractory to all forms of medical treatment. Accurate mapping of the locations of epileptic neuronal networks can result in the complete resection of epileptic foci. Even though electroencephalography is currently the best technique for mapping the epileptic focus, it cannot define the boundary of the epileptic region that accurately. Herein we put forward a new, accurate brain mapping technique using superparamagnetic nanoparticles (SPMNs). The main hypothesis in this new approach is the creation of superparamagnetic aggregates in the epileptic foci due to their high electrical and magnetic activity. These aggregates may improve the tissue contrast of magnetic resonance imaging (MRI), which would in turn improve the resection of epileptic foci. In this paper, we present the mathematical models before discussing the simulation results. Furthermore, we mimic the aggregation of SPMNs in a weak magnetic field using a low-cost microfabricated device. Based on these results, SPMNs may play a crucial role in the diagnosis of epilepsy and the subsequent treatment of this disease. Full article
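The trajectory simulations summarized above reduce to integrating an overdamped equation of motion: at the nanoscale, inertia is negligible, so the magnetic force (proportional to the gradient of |B|²) is balanced instantaneously by Stokes drag. The toy integrator below is a hedged illustration, not the authors' model; the wire-like 1/d³ force law and the lumped mobility constant `k` are simplifying assumptions.

```python
import numpy as np

def drift_to_source(p0, src, steps=2000, dt=1e-3, k=1e-3):
    """Euler integration of overdamped drift toward a wire-like field
    source: |B| ~ 1/d, so the force ~ grad|B|^2 falls off as 1/d^3.
    Stokes drag and magnetic susceptibility are lumped into k."""
    p = np.asarray(p0, dtype=float)
    s = np.asarray(src, dtype=float)
    for _ in range(steps):
        r = p - s
        d = np.linalg.norm(r)
        p += -k * r / d**4 * dt     # drift velocity times time step
    return p
```

Running many such particles from scattered starting points reproduces the qualitative aggregation behavior shown in the trajectory figures.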
(This article belongs to the Special Issue Inorganic Nanoparticles as Biomedical Probes)
Show Figures

Figure 1
<p>Illustration of SPMN aggregation in the epileptic zone.</p>
Full article ">Figure 2
<p>Numerically analyzed potential energy (units in axis x-y are <span class="html-italic">m</span>, unit in z-axis is <span class="html-italic">J</span>).</p>
Full article ">Figure 3
<p>Trajectory of nine nanoparticles under the effect of one magnetic field source (units are mm).</p>
Full article ">Figure 4
<p>Trajectory of ten nanoparticles under the effect of one magnetic field source (units are mm).</p>
Full article ">Figure 5
<p>Trajectory of seventy nanoparticles under one magnetic field source (units are mm).</p>
Full article ">Figure 6
<p>Trajectory of fifteen nanoparticles under three magnetic field sources (units are mm).</p>
Full article ">Figure 7
<p>Trajectory of 100 nanoparticles under ten different magnetic field sources (Units of X and Y axes are mm).</p>
Full article ">Figure 8
<p>Trajectory of 100 nanoparticles under ten different magnetic field sources (Units of X and Y axes are mm).</p>
Full article ">Figure 9
<p>COMSOL simulation of the magnetic field above the micro coil; Gradient of the magnetic field (<b>a</b>) from the top and (<b>b</b>) close to the conductor.</p>
Full article ">Figure 10
<p>Experimental results: aggregation of nanoparticles above the microcoil. Generating a magnetic field (<b>a</b>) before applying an electromagnetic field; (<b>b</b>) immediately after applying electromagnetic field and (<b>c</b>) 10 s after applying electromagnetic field.</p>
Full article ">Figure 10 Cont.
<p>Experimental results: aggregation of nanoparticles above the microcoil. Generating a magnetic field (<b>a</b>) before applying an electromagnetic field; (<b>b</b>) immediately after applying electromagnetic field and (<b>c</b>) 10 s after applying electromagnetic field.</p>
Full article ">Figure 11
<p>Schematic of 2D analysis of motion of nanoparticles (Nanoparticle movement is considered in <span class="html-italic">y</span> and <span class="html-italic">z</span> plane).</p>
Full article ">Figure 12
<p>Coordination of single wire in three-dimensional space.</p>
Full article ">
537 KiB  
Article
Configuration Analysis of the ERS Points in Large-Volume Metrology System
by Zhangjun Jin, Cijun Yu, Jiangxiong Li and Yinglin Ke
Sensors 2015, 15(9), 24397-24408; https://doi.org/10.3390/s150924397 - 22 Sep 2015
Cited by 34 | Viewed by 6366
Abstract
In aircraft assembly, multiple laser trackers are used simultaneously to measure large-scale aircraft components. To combine the independent measurements, the transformation matrices between the laser trackers’ coordinate systems and the assembly coordinate system are calculated, by measuring the enhanced referring system (ERS) points. [...] Read more.
In aircraft assembly, multiple laser trackers are used simultaneously to measure large-scale aircraft components. To combine the independent measurements, the transformation matrices between the laser trackers’ coordinate systems and the assembly coordinate system are calculated by measuring the enhanced referring system (ERS) points. This article aims to understand how the configuration of the ERS points affects the transformation matrix errors, and then to optimize the deployment of the ERS points so as to reduce those errors. To this end, an explicit model is derived to estimate the transformation matrix errors. The estimation model is verified by an experiment implemented on the factory floor. Based on the proposed model, a group of sensitivity coefficients is derived to evaluate the quality of a configuration of ERS points, and several typical configurations of the ERS points are then analyzed in detail with these coefficients. Finally, general guidance is established to instruct the deployment of the ERS points with respect to the layout, the volume size and the number of the ERS points, as well as the position and orientation of the assembly coordinate system. Full article
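Combining the trackers' independent measurements hinges on estimating a rigid transformation from the common ERS points. A standard way to compute it (one plausible reading of the setup described here, not necessarily the authors' exact algorithm) is the SVD-based least-squares fit, often called the Kabsch or orthogonal Procrustes solution:

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    both of shape (n, 3), via the Kabsch/Procrustes SVD method."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Here `P` would hold the ERS coordinates measured by one tracker and `Q` the same points in the assembly coordinate system; the sensitivity of `(R, t)` to noise in `P` is exactly what the article's configuration analysis quantifies.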
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1
<p>The measurement principle in large-scale aircraft assembly systems.</p>
Full article ">Figure 2
<p>The overall layout of the experiment.</p>
Full article ">Figure 3
<p>The relative estimation errors of the uncertainties of transformation parameter errors. (<b>a</b>) The estimation errors of the uncertainties of rotation parameter errors; (<b>b</b>) The estimation errors of the uncertainties of translation parameter errors.</p>
Full article ">Figure 4
<p>The geometric layouts of the ERS points. (<b>a</b>) Triangular prism layout; (<b>b</b>) Double rectangular pyramid layout; (<b>c</b>) Cube layout; (<b>d</b>) Cuboid layout; (<b>e</b>) Double triangular pyramid layout; (<b>f</b>) Rectangular pyramid layout.</p>
Full article ">Figure 5
<p>The sensitivity coefficients of rotation parameter errors versus volume size. (<b>a</b>) Sensitivity coefficients of layout a; (<b>b</b>) Sensitivity coefficients of layout b; (<b>c</b>) Sensitivity coefficients of layout c; (<b>d</b>) Sensitivity coefficients of layout d; (<b>e</b>) Sensitivity coefficients of layout e; (<b>f</b>) Sensitivity coefficients of layout f.</p>
Full article ">Figure 6
<p>The sensitivity coefficients of transformation parameter errors versus ERS point number. (<b>a</b>) Sensitivity coefficients of layout a; (<b>b</b>) Sensitivity coefficients of layout b; (<b>c</b>) Sensitivity coefficients of layout c; (<b>d</b>) Sensitivity coefficients of layout d; (<b>e</b>) Sensitivity coefficients of layout e; (<b>f</b>) Sensitivity coefficients of layout f.</p>
Full article ">Figure 7
<p>The fluctuation of sensitivity coefficients of rotation parameter errors. (<b>a</b>) Sensitivity coefficients of layout a; (<b>b</b>) Sensitivity coefficients of layout b; (<b>c</b>) Sensitivity coefficients of layout c; (<b>d</b>) Sensitivity coefficients of layout d; (<b>e</b>) Sensitivity coefficients of layout e; (<b>f</b>) Sensitivity coefficients of layout f.</p>
Full article ">
1476 KiB  
Review
Recent Progress in Fluorescent Imaging Probes
by Yen Leng Pak, K. M. K. Swamy and Juyoung Yoon
Sensors 2015, 15(9), 24374-24396; https://doi.org/10.3390/s150924374 - 22 Sep 2015
Cited by 104 | Viewed by 12674
Abstract
Due to the simplicity and low detection limit, especially the bioimaging ability for cells, fluorescence probes serve as unique detection methods. With the aid of molecular recognition and specific organic reactions, research on fluorescent imaging probes has blossomed during the last decade. Especially, [...] Read more.
Owing to their simplicity and low detection limits, and especially their ability to image cells, fluorescent probes serve as unique detection tools. With the aid of molecular recognition and specific organic reactions, research on fluorescent imaging probes has blossomed during the last decade. In particular, reaction-based fluorescent probes have proven to be highly selective for specific analytes. This review highlights our recent progress on fluorescent imaging probes for biologically important species, such as biothiols, reactive oxygen species, reactive nitrogen species, metal ions including Zn2+, Hg2+, Cu2+ and Au3+, and anions including cyanide and adenosine triphosphate (ATP). Full article
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Reaction scheme for biothiol selective probe <b>1</b>.</p>
Full article ">Figure 2
<p>Comparison between CyAE (<b>2</b>) and CyAK (<b>2′</b>) and reaction scheme for Cys selective probe <b>3</b> (CyAC).</p>
Full article ">Figure 3
<p>Structures of biothiol probes <b>4</b>–<b>6</b>.</p>
Full article ">Figure 4
<p>Reaction scheme for biothiol selective probe <b>7</b> and GSH selective probe <b>8</b>.</p>
Full article ">Figure 5
<p><span class="html-italic">In vivo</span> images of a mouse injected with probe <b>8</b> (50 µM) or NMM (20 mM) intravenously for 20 min. Fluorescence images of: (<b>A</b>) the mouse not injected with probe <b>8</b> (No injection); (<b>B</b>) the mouse injected with NMM (NMM only); (<b>C</b>) the mouse injected with probe <b>8</b> (<b>8</b> only); (<b>D</b>) the mouse injected with probe <b>8</b> after pre-injection with NMM. (Reprinted from reference [<a href="#B27-sensors-15-24374" class="html-bibr">27</a>]).</p>
Full article ">Figure 6
<p>Structure of probe <b>10</b> and its reaction scheme with Cys and Hcy.</p>
Full article ">Figure 7
<p>Structures of HOCl selective fluorescent probes <b>11</b>–<b>13</b> and reaction scheme for <b>11</b> with HOCl.</p>
Full article ">Figure 8
<p>Reaction scheme for HOCl selective fluorescent probe <b>14</b> (<b>FBS</b>).</p>
Full article ">Figure 9
<p>Structures of <b>15</b> (<b>PIS</b>) and <b>16</b> (<b>NIS</b>) and reaction scheme of <b>15</b> (<b>PIS</b>) with HOCl.</p>
Full article ">Figure 10
<p>TPM images of (<b>a</b>–<b>e</b>) <b>PIS</b> and (<b>f</b>) <b>16</b> (10 μM, ρ<sub>DMF</sub> = 0.5%) labeled RAW 264.7 cells. (<b>a</b>) Control image; (<b>b</b>) Cells pretreated with NaOCl (200 μM) for 30 min and then incubated with <b>PIS</b>; (<b>c</b>) Cells pretreated with LPS (100 ng/mL) for 16 h, IFN-γ (400 U/mL) for 4 h, and PMA (10 nM) for 30 min and then with <b>PIS</b>; (<b>d</b>) Cells pretreated with LPS, IFN-γ, and 4-ABAH (50 μM) for 4 h and then incubated with <b>PIS</b>; (<b>e</b>) Cells pretreated with LPS, IFN-γ, and FAA (50 μM) for 4 h and then with <b>PIS</b>; (<b>g</b>) Average TPEF intensities in (<b>a</b>–<b>f</b>), <span class="html-italic">n</span> = 5. Scale bar: 20 μm. (Reprinted from reference [<a href="#B32-sensors-15-24374" class="html-bibr">32</a>]).</p>
Full article ">Figure 11
<p>Reaction scheme for probe <b>17</b> with ONOO<sup>−</sup>.</p>
Full article ">Figure 12
<p>Reaction scheme of peroxinitrite probe <b>18</b>.</p>
Full article ">Figure 13
<p>Confocal ratiometric fluorescence images of RAW 264.7 cells for endogenous ONOO<sup>−</sup> during phagocytic immune response. The cells were stained with 5 μM <b>18</b> (<b>CHCN</b>) for 30 min and then washed with DPBS before imaging. (<b>a</b>) Control; (<b>e</b>) lipopolysaccharides (LPS) (1 μg/mL) for 16 h, interferon-γ (50 ng/mL) for 4 h, PMA (10 nM) for 30 min; (<b>i</b>) LPS (1 μg/mL) for 16 h, interferon-γ (50 ng/mL) for 4 h, PMA (10 nM) for 30 min, and then AG (1 mM) for 16 h; (<b>m</b>) LPS (1 μg/mL) for 16 h, interferon-γ (50 ng/mL) for 4 h, PMA (10 nM) for 30 min, and then TEMPO (100 μM) for 16 h. The green channel (<b>a</b>,<b>e</b>,<b>i</b>,<b>m</b>) represents the fluorescence obtained at 490–540 nm with an excitation wavelength of 473 nm, the red channel (<b>b</b>,<b>f</b>,<b>j</b>,<b>n</b>) represents the fluorescence obtained at 575–675 nm with an excitation wavelength of 559 nm, images (<b>c</b>,<b>g</b>,<b>k</b>,<b>o</b>) represent DIC (differential interference contrast) channels, and images (<b>d</b>,<b>h</b>,<b>l</b>,<b>p</b>) represent merged images of the red and green channels, respectively. (Reprinted from reference [<a href="#B34-sensors-15-24374" class="html-bibr">34</a>]).</p>
Full article ">Figure 14
<p>Design strategy of H<sub>2</sub>O<sub>2</sub> selective probe <b>19</b> and its reaction scheme with H<sub>2</sub>O<sub>2</sub>.</p>
Full article ">Figure 15
<p>Proposed binding mode of probe <b>20</b> with Zn<sup>2+</sup>.</p>
Full article ">Figure 16
<p>Proposed binding modes of probe <b>21</b> with metal ions.</p>
Full article ">Figure 17
<p>Zebrafish incubated with probe <b>21</b> (5 μM). (<b>a</b>) Images of 19 h-old; (<b>b</b>) 36 h-old; and (<b>c</b>) 48 h-old zebrafish incubated with <b>21</b> for 1 h; (<b>d</b>) Image of 54 h-old zebrafish incubated with <b>21</b> for 1 h and (<b>e</b>) image of 54 h-old zebrafish after initial incubation with 100 μM TPEN for 1 h and subsequent treatment of washed zebrafish with <b>21</b> for 1 h. (Reprinted from reference [<a href="#B38-sensors-15-24374" class="html-bibr">38</a>]).</p>
Full article ">Figure 18
<p>Proposed binding mode of probe <b>22</b> with Zn<sup>2+</sup>.</p>
Full article ">Figure 19
<p>Fluorescent detection of intact zinc ions in zebra fish using probe <b>22</b> (<b>a</b>) 24 h; (<b>b</b>) 36 h; (<b>c</b>) 48 h; (<b>d</b>) 72 h and (<b>e</b>) 96 h-old zebra fish incubated with probe <b>22</b> for 1 h. (Reprinted from reference [<a href="#B39-sensors-15-24374" class="html-bibr">39</a>]).</p>
Full article ">Figure 20
<p>Structures of Hg<sup>2+</sup> selective probes <b>23</b>–<b>25</b>.</p>
Full article ">Figure 21
<p>Proposed reaction scheme for probe <b>26</b> with Hg<sup>2+</sup> and CH<sub>3</sub>Hg<sup>+</sup>.</p>
Full article ">Figure 22
<p>Images of zebrafish organs treated with 20 µM <b>26</b> (0.2% DMSO) and 500 nM HgCl<sub>2</sub> or 500 nM CH<sub>3</sub>HgCl. (<b>a</b>) Images of zebrafish organs treated with <b>26</b> in the absence of HgCl<sub>2</sub> or CH<sub>3</sub>HgCl; (<b>b</b>) presence of 500 nM HgCl<sub>2</sub> or (<b>c</b>) presence of 500 nM CH<sub>3</sub>HgCl (upper, merged images; lower, fluorescence images). (Reprinted from reference [<a href="#B47-sensors-15-24374" class="html-bibr">47</a>]).</p>
Full article ">Figure 23
<p>Proposed binding mode of probe <b>27</b> with Cu<sup>2+</sup>.</p>
Full article ">Figure 24
<p>Proposed reaction scheme of probe <b>28</b> with Au<sup>3+</sup>.</p>
Full article ">Figure 25
<p>Structure of the probe <b>30</b>.</p>
Full article ">Figure 26
<p>(<b>a</b>) Response rates of <b>30</b> to Au<sup>3+</sup> in HeLa cells and differentiated adipocytes. HeLa cells and adipocytes were incubated with 100 μM AuCl<sub>3</sub> for 2 h and then with <b>30</b> (20 μM) for 2, 4, 6, and 8 h (■: HeLa cells, ▲: adipocytes); (<b>b</b>) Time-dependent fluorescence images of HeLa cells and adipocytes treated with <b>30</b> and Au<sup>3+</sup> (scale bar = 20 μm). (Reprinted from reference [<a href="#B51-sensors-15-24374" class="html-bibr">51</a>]).</p>
Full article ">Figure 27
<p>Structure of cyanide selective probe <b>31</b> and <b>31-Cu<sup>2+</sup></b> complex.</p>
Full article ">Figure 28
<p>NIR imaging of cyanide in <span class="html-italic">C. elegans</span> infected with a <span class="html-italic">P. aeruginosa</span> strain (PA14) labeled with green fluorescent protein (GFP). Before the imaging, the nematodes fed on either non-infectious <span class="html-italic">E. coli</span> OP50 (<b>a</b>) or GFP-labeled PA14 for 2 days (<b>b</b>–<b>d</b>); (<b>b</b>) the anterior end; (<b>c</b>) the medial part; (<b>d</b>) the posterior end of <span class="html-italic">C. elegans</span>. The scale bars represent 20 μm. (IL = intestinal lumen; I = intestine; E = eggs; PA = PA14-GFP; A = anus). (Reprinted from reference [<a href="#B54-sensors-15-24374" class="html-bibr">54</a>]).</p>
Full article ">Figure 29
<p>Visualization of antibiotic efficacy against <span class="html-italic">P. aeruginosa</span> infection in <span class="html-italic">C. elegans</span> with the NIR sensor. The nematodes fed on GFP-labeled <span class="html-italic">P. aeruginosa</span> (PA14) for two days. They were then incubated with ceftazidime (200 μg/mL) for 2 h before the <span class="html-italic">in vivo</span> imaging. The scale bars represent 20 μm. (Reprinted from reference [<a href="#B53-sensors-15-24374" class="html-bibr">53</a>]).</p>
Full article ">Figure 30
<p>Proposed binding modes of probe <b>32</b> with ATP and GTP.</p>
Full article ">
1104 KiB  
Article
Implementation and Evaluation of Four Interoperable Open Standards for the Internet of Things
by Mohammad Ali Jazayeri, Steve H. L. Liang and Chih-Yuan Huang
Sensors 2015, 15(9), 24343-24373; https://doi.org/10.3390/s150924343 - 22 Sep 2015
Cited by 35 | Viewed by 9369
Abstract
Recently, researchers are focusing on a new use of the Internet called the Internet of Things (IoT), in which enabled electronic devices can be remotely accessed over the Internet. As the realization of IoT concept is still in its early stages, manufacturers of [...] Read more.
Recently, researchers have been focusing on a new use of the Internet called the Internet of Things (IoT), in which enabled electronic devices can be remotely accessed over the Internet. As the realization of the IoT concept is still in its early stages, manufacturers of Internet-connected devices and IoT web service providers are defining their own proprietary protocols based on their targeted applications. Consequently, the IoT is heterogeneous in terms of hardware capabilities and communication protocols. Addressing these heterogeneities by following open standards is a necessary step toward communicating with various IoT devices. In this research, we assess the feasibility of applying existing open standards on resource-constrained IoT devices. The standard protocols developed in this research are OGC PUCK over Bluetooth, TinySOS, SOS over CoAP, and the OGC SensorThings API. We believe that by hosting open standard protocols on IoT devices, not only do the devices become self-describable, self-contained, and interoperable, but innovative applications can also be easily developed with standardized interfaces. In addition, we use memory consumption, request message size, response message size, and response latency to benchmark the efficiency of the implemented protocols. In all, this research presents and evaluates standards-based solutions to better understand the feasibility of applying existing standards to the IoT vision. Full article
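To make the request-size comparison concrete, here is what "small" means at the CoAP layer: a confirmable GET request is a 4-byte header plus a token and delta-encoded options. The encoder below is a minimal sketch of the RFC 7252 wire format (it assumes option deltas and Uri-Path segment lengths below 13, enough for short paths), not code from the paper.

```python
def coap_get(message_id, uri_path, token=b""):
    """Encode a minimal CoAP confirmable GET request (RFC 7252).
    Assumes every option delta and Uri-Path segment length is < 13,
    so the short one-byte option form suffices."""
    ver, mtype = 1, 0                        # version 1, type 0 = CON
    header = bytes([(ver << 6) | (mtype << 4) | len(token),
                    0x01,                    # code 0.01 = GET
                    (message_id >> 8) & 0xFF,
                    message_id & 0xFF])
    options, last = b"", 0
    for seg in uri_path.strip("/").split("/"):
        delta, seg_b = 11 - last, seg.encode()   # Uri-Path = option 11
        options += bytes([(delta << 4) | len(seg_b)]) + seg_b
        last = 11
    return header + token + options
```

For instance, a GET for a hypothetical `obs` resource fits in 8 bytes, versus hundreds of bytes for an equivalent HTTP/XML SOS request, which is the point of the paper's message-size benchmarks.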
(This article belongs to the Section Sensor Networks)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Technical overview of the IoT (Source: Recommendation ITU-T Y.2060—Overview of the Internet of Things [<a href="#B1-sensors-15-24343" class="html-bibr">1</a>]).</p>
Full article ">Figure 2
<p>The overall workflow for accessing the sensor measurements.</p>
Full article ">Figure 3
<p>Procedures of the sensor protocol.</p>
Full article ">Figure 4
<p>The system architecture supporting the PUCK protocol.</p>
Full article ">Figure 5
<p>The system architecture supporting the TinySOS protocol [<a href="#B18-sensors-15-24343" class="html-bibr">18</a>].</p>
Full article ">Figure 6
<p>High level view of the SOS over CoAP strategy.</p>
Full article ">Figure 7
<p>The device architecture supporting the CoAP protocol.</p>
Full article ">Figure 8
<p>The architecture of the SOSCoAP Proxy.</p>
Full article ">Figure 9
<p>Ecosystem of the OGC SensorThings API.</p>
Full article ">Figure 10
<p>The device architecture supporting the OGC SensorThings API.</p>
Full article ">Figure 11
<p>Request size evaluation for the get observation request.</p>
Full article ">Figure 12
<p>Response size <span class="html-italic">vs.</span> the number of sensor readings.</p>
Full article ">Figure 13
<p>Response size <span class="html-italic">vs.</span> the number of sensor readings (removed the OGC SensorThings trend).</p>
Full article ">Figure 14
<p>Response latency <span class="html-italic">vs.</span> the number of sensor readings.</p>
Full article ">Figure 15
<p>Response latency <span class="html-italic">vs.</span> the number of sensor readings (removed TinySOS).</p>
Full article ">Figure 16
<p>HTTP GET request and response to the simple web server through the Internet.</p>
Full article ">Figure 17
<p>GETREADING request and response to the PUCK-enabled Netduino Plus through Bluetooth.</p>
Full article ">Figure 18
<p>GetObservation request to TinySOS using the 52 North test client tool.</p>
Full article ">Figure 19
<p>GetObservation request to the SOSCoAP proxy using 52 North test client tool.</p>
Full article ">Figure 20
<p>GetObservation request to a CoAP server (<span class="html-italic">i.e.</span>, Netduino Plus) using the SOSCoAP Proxy.</p>
Full article ">Figure 21
<p>HTTP GET request and response to the OGC SensorThings through the Internet.</p>
Full article ">Figure 22
<p>Summarized response of the SensorThings.</p>
Full article ">Figure 23
<p>Response of the SensorThings to multiple readings request.</p>
Full article ">
1630 KiB  
Article
Measurement and Evaluation of the Gas Density and Viscosity of Pure Gases and Mixtures Using a Micro-Cantilever Beam
by Anastasios Badarlis, Axel Pfau and Anestis Kalfas
Sensors 2015, 15(9), 24318-24342; https://doi.org/10.3390/s150924318 - 22 Sep 2015
Cited by 23 | Viewed by 8437
Abstract
Measurement of gas density and viscosity was conducted using a micro-cantilever beam. In parallel, the validity of the proposed modeling approach was evaluated. This study also aimed to widen the database of the gases on which the model development of the micro-cantilever beams [...] Read more.
Measurement of gas density and viscosity was conducted using a micro-cantilever beam. In parallel, the validity of the proposed modeling approach was evaluated. This study also aimed to widen the database of gases on which model development for micro-cantilever beams is based. The density and viscosity of gases are orders of magnitude lower than those of liquids. For this reason, the use of a very sensitive sensor is essential. In this study, a micro-cantilever beam from the field of atomic force microscopy was used. Although the current cantilever was designed to work with thermal activation, in this investigation it was activated with an electromagnetic force. The deflection of the cantilever beam was detected by an integrated piezo-resistive sensor. Six pure gases and sixteen mixtures of them were investigated at ambient conditions. The outcome of the investigation showed that the current cantilever beam had a sensitivity of 240 Hz/(kg/m3), while the accuracy of the determined gas density and viscosity at ambient conditions reached ±1.5% and ±2.0%, respectively. Full article
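The reported sensitivity of 240 Hz/(kg/m³) implies a simple first-order readout: in the small-damping limit, the resonance frequency decreases roughly linearly with the density of the surrounding gas. The helper below captures only this linearization, with an illustrative reference point, not the full resonator model used in the paper.

```python
def gas_density(f_measured, f_reference, sensitivity=240.0, rho_reference=0.0):
    """Linearized density estimate from the resonance-frequency shift.
    `sensitivity` is in Hz/(kg/m^3); `f_reference` is the resonance
    frequency at the known density `rho_reference` (taken here as a
    hypothetical vacuum reference, an illustrative assumption)."""
    return rho_reference + (f_reference - f_measured) / sensitivity
```

A 288 Hz drop from the reference frequency would then read as 1.2 kg/m³, roughly the density of ambient air; viscosity is extracted separately from the damping (quality factor), as the article's Q-versus-√(ηρ) plot indicates.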
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>Micro-cantilever beam. At the free end, the AFM tip is barely distinguishable, while the imprinted aluminum conductor can be seen clearly. The imprinted conductor has a meandering form, due to the fact that the sensor was primarily designed to work with thermal excitation.</p>
Full article ">Figure 2
<p>Schematic diagram of the experimental facility. The composition was selected by controlling the flow through the mass flow controller; the two gases were then mixed, and finally the mixture was guided into the gas cell. During the measurement, no-flow conditions prevailed in the gas cell.</p>
Full article ">Figure 3
<p>Schematic diagram of the experimental setup.</p>
Full article ">Figure 4
<p>Gas cell photo. The micro-cantilever beam is mounted on the cantilever chip, which is mounted on the PCB. In addition, the magnet, the pressure, the temperature and the humidity sensors are displayed.</p>
Full article ">Figure 5
<p>Cantilever beam and its simple harmonic oscillator representation, a system with mass, spring and damping. The external excitation is the Lorentz force on the imprinted conductor.</p>
Full article ">Figure 6
<p>Quality factor deviation between the gradient method and the Lorentzian fit method.</p>
Full article ">Figure 7
<p>Calibration and measurement flowcharts.</p>
Full article ">Figure 8
<p>Amplitude of the micro-cantilever response in a spectrum. The first three flexural modes are clearly distinguishable.</p>
Full article ">Figure 9
<p>Phase shift of the micro-cantilever response.</p>
Full article ">Figure 10
<p>Sensor linearity.</p>
Full article ">Figure 11
<p>Repeatability investigations of the cantilever. The deviation between the curves was less than 0.07%, while the standard deviation (error bars) for each measurement point did not exceed 0.005%.</p>
Full article ">Figure 12
<p>Frequency spectrum for different gases.</p>
Full article ">Figure 13
<p>Resonance against density for different gases.</p>
Full article ">Figure 14
<p>Quality factor against <math display="inline"> <msqrt> <mrow> <mi>η</mi> <mi>ρ</mi> </mrow> </msqrt> </math> (kg s<math display="inline"> <msup> <mrow/> <mrow> <mo>-</mo> <mn>0</mn> <mo>.</mo> <mn>5</mn> </mrow> </msup> </math>/m<math display="inline"> <msup> <mrow/> <mn>2</mn> </msup> </math>).</p>
Full article ">Figure 15
<p>Resonance frequency experiments <span class="html-italic">versus</span> the model. The color of the points represents the measured quality factor.</p>
Full article ">Figure 16
<p>Quality factor experiments against the model. The color of the points represents the measured resonance frequency.</p>
Full article ">Figure 17
<p>Deviation of the determined density from the nominal value. Calibration of the sensor in four gases (cross symbol). The color of the points represents the dynamic viscosity.</p>
Full article ">Figure 18
<p>Deviation of the determined dynamic viscosity from the nominal value. Calibration of the sensor in four gases (cross symbol). The color of the points represents the density.</p>
Full article ">
2311 KiB  
Article
Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture
by Zhiquan Gao, Yao Yu, Yu Zhou and Sidan Du
Sensors 2015, 15(9), 24297-24317; https://doi.org/10.3390/s150924297 - 22 Sep 2015
Cited by 38 | Viewed by 8839
Abstract
Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure [...] Read more.
Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still obtain an accurate estimation even when significant occlusion occurs. Because human motion is temporally consistent, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body under the constraint of information from the temporal domain. Full article
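The per-joint accuracy comparison against the OptiTrack ground truth (Figure 9) reduces to a simple metric once the estimated and ground-truth joint positions are aligned in a common frame: the mean Euclidean distance per joint over all frames. A minimal version, with a hypothetical array layout and assuming the alignment has already been done:

```python
import numpy as np

def mean_joint_error(est, gt):
    """Mean Euclidean joint-position error, per joint, in the units of
    the inputs (e.g. mm). Both arrays have shape (frames, joints, 3)."""
    return np.linalg.norm(est - gt, axis=-1).mean(axis=0)
```

Averaging the result over joints gives the single per-method figure plotted in the article's quantitative comparison.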
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>(<b>a</b>) Setup of our system; (<b>b</b>) data acquisition: RGB image (top) and depth data (bottom) from two Kinect sensors; (<b>c</b>) template SCAPE model; (<b>d</b>) reconstructed SCAPE model (top) and projection (bottom); the yellow shadows are the projections of our reconstructed model.</p>
Full article ">Figure 2
<p>(<b>a</b>) SCAPE model; (<b>b</b>) scanned point cloud; (<b>c</b>) template SCAPE model.</p>
Full article ">Figure 3
<p>Registration results: (<b>a</b>) registration results using our calibration method; (<b>b</b>) registration results using stereo calibration in the calibration toolbox of MATLAB.</p>
Full article ">Figure 4
<p>Impact of our temporal term: the yellow shadow on these four pictures is the projection of our model transformed with the estimated pose parameters. (<b>a</b>) projection of the reconstructed model for the left camera without our temporal term; (<b>b</b>) projection of our reconstructed model for the left camera with our temporal term; (<b>c</b>) projection of the reconstructed model for the left camera without our temporal term; (<b>d</b>) projection of our reconstructed model for the left camera with our temporal term.</p>
Full article ">Figure 5
<p>Results of the motion capture: Column 1 shows the RGB images from the left camera (for brevity, we do not show the RGB images from the right camera). Column 2 shows our results in the front view. Column 3 shows the results from viewpoints different from Column 1.</p>
Full article ">Figure 6
<p>Comparison against Kinect: (<b>a</b>) skeletons from Kinect; (<b>b</b>,<b>c</b>) skeletons of our reconstructed model projections to the RGB images from the left camera and the right camera; the red dots are the joints of the model, and blue dots in (<b>b</b>,<b>c</b>) are the centers of the corresponding bones.</p>
Full article ">Figure 7
<p>Comparison against [<a href="#B15-sensors-15-24297" class="html-bibr">15</a>]: (<b>a</b>,<b>b</b>) the reconstructed SCAPE model’s projection respectively using the depth data from the left and the right camera with the method of [<a href="#B15-sensors-15-24297" class="html-bibr">15</a>]; (<b>c</b>,<b>d</b>) our reconstructed model’s projection using the depth data from the left and the right camera at the same time.</p>
Full article ">Figure 8
<p>OptiTrack system setup for ground truth data.</p>
Full article ">Figure 9
<p>Quantitative comparison of the average reconstruction error. The vertical axis represents the error of joint position discrepancy, and its coordinate unit is mm. The horizontal axis successively shows the name of all of the joints, except that Column 14 shows the average error.</p>
Full article ">Figure 10
<p>Difficult poses. Figures on the top are RGB images from the left camera. Figures on the bottom are reconstructed models from the front view.</p>
Full article ">Figure 11
<p>Skeletons of difficult poses obtained by the OptiTrack system, Xu’s monocular setup [<a href="#B15-sensors-15-24297" class="html-bibr">15</a>], the Kinect SDK [<a href="#B20-sensors-15-24297" class="html-bibr">20</a>] and our two-Kinect system. The first column shows the RGB images from the left camera. Column 2 shows a comparison of our reconstructed poses and that from the OptiTrack system, where the red skeletons are from the OptiTrack system and the blue are our reconstructed results. Red skeletons in Column 3 are also from the OptiTrack system, and the black ones are derived by Xu’s method [<a href="#B15-sensors-15-24297" class="html-bibr">15</a>]. In Column 4, the green skeletons are obtained from the Kinect SDK, while the red are from the OptiTrack system.</p>
Full article ">
2431 KiB  
Article
An Enhanced Error Model for EKF-Based Tightly-Coupled Integration of GPS and Land Vehicle’s Motion Sensors
by Tashfeen B. Karamat, Mohamed M. Atia and Aboelmagd Noureldin
Sensors 2015, 15(9), 24269-24296; https://doi.org/10.3390/s150924269 - 22 Sep 2015
Cited by 17 | Viewed by 6931
Abstract
Reduced inertial sensor systems (RISS) have been introduced by many researchers as a low-cost, low-complexity sensor assembly that can be integrated with GPS to provide a robust integrated navigation system for land vehicles. In earlier works, the developed error models were simplified based [...] Read more.
Reduced inertial sensor systems (RISS) have been introduced by many researchers as a low-cost, low-complexity sensor assembly that can be integrated with GPS to provide a robust integrated navigation system for land vehicles. In earlier works, the developed error models were simplified based on the assumption that the vehicle is mostly moving on a flat horizontal plane. Another limitation is the simplified estimation of the horizontal tilt angles, which is based on simple averaging of the accelerometers’ measurements without modelling their errors or tilt angle errors. In this paper, a new error model is developed for RISS that accounts for the effect of tilt angle errors and the accelerometer’s errors. It also includes important terms in the system dynamic error model, which were ignored during the linearization process in earlier works. An augmented extended Kalman filter (EKF) is designed to incorporate tilt angle errors and transversal accelerometer errors. The new error model and the augmented EKF design are developed in a tightly-coupled RISS/GPS integrated navigation system. The proposed system was tested on real trajectory data under degraded GPS environments, and the results were compared to earlier works on RISS/GPS systems. The findings demonstrated that the proposed enhanced system introduced significant improvements in navigational performance. Full article
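The EKF at the heart of such an integration alternates prediction and measurement-update steps. A minimal scalar sketch in Python (the state, noise values and measurements here are illustrative placeholders, not the paper's RISS/GPS error model):

```python
def ekf_step(x, P, F, Q, z, H, R):
    """One scalar EKF cycle: propagate the error state, then correct it
    with the measurement residual (innovation)."""
    # Prediction: propagate state estimate and its uncertainty
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: the Kalman gain weighs prediction against measurement
    S = H * P_pred * H + R          # innovation covariance
    K = P_pred * H / S
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Toy run: a constant true error of 1.0 observed with noise variance 0.5
x, P = 0.0, 1.0
for z in (1.0, 1.0, 1.0):
    x, P = ekf_step(x, P, F=1.0, Q=0.01, z=z, H=1.0, R=0.5)
# x converges toward 1.0 while the variance P shrinks
```

In the tightly-coupled setting the measurement `z` would be a vector of GPS pseudorange residuals rather than a scalar, but the predict/update cycle is the same.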
(This article belongs to the Special Issue Advances on Resources Management for Multi-Platform Infrastructures)
Show Figures

Figure 1
<p>High level block diagram of system integration through estimation techniques.</p>
Full article ">Figure 2
<p>Arrangement of 3D reduced inertial sensor system (RISS) sensors with respect to the body frame.</p>
Full article ">Figure 3
<p>3D RISS mechanization block diagram.</p>
Full article ">Figure 4
<p>EKF-based RISS/GPS tightly-coupled integrated navigation system.</p>
Full article ">Figure 5
<p>Navigation equipment and supporting hardware placed in a van for trajectory data collection.</p>
Full article ">Figure 6
<p>Test trajectory in Kingston downtown, ON, Canada.</p>
Full article ">Figure 7
<p>Maximum 2D positional errors for GPS outages of the Kingston downtown trajectory.</p>
Full article ">Figure 8
<p>A comparison of tightly-coupled (TC-EKF), TC-linearized KF (LKF) and particle filter (PF) for the average maximum 2D position error for the Kingston downtown trajectory.</p>
Full article ">Figure 9
<p>A comparison of TC-EKF with TC-LKF for the average RMS 2D position error for the Kingston downtown trajectory.</p>
Full article ">Figure 10
<p>Noisy GPS during the Kingston downtown trajectory.</p>
Full article ">Figure 11
<p>(Top) Performance of estimated pitch angle <span class="html-italic">versus</span> the reference (NovAtel) for the Kingston trajectory. The bottom plot shows the difference between the two systems.</p>
Full article ">Figure 12
<p>(Top) Performance of estimated roll angle <span class="html-italic">versus</span> the reference (NovAtel) for the Kingston trajectory. The bottom plot shows the difference between the two systems.</p>
Full article ">Figure 13
<p>Gyroscope bias estimation convergence in the Kingston downtown trajectory.</p>
Full article ">Figure 14
<p>Main section of the test trajectory in downtown Toronto, ON, Canada.</p>
Full article ">Figure 15
<p>Availability of the satellites during the Toronto downtown trajectory.</p>
Full article ">Figure 16
<p>Maximum 2D positional errors of TC-EKF, the LC-LKF of [<a href="#B28-sensors-15-24269" class="html-bibr">28</a>], the TC-LKF of [<a href="#B14-sensors-15-24269" class="html-bibr">14</a>] and the PF of [<a href="#B31-sensors-15-24269" class="html-bibr">31</a>].</p>
Full article ">Figure 17
<p>Performance in noisy GPS signal conditions during outage # 2, 3 and 4.</p>
Full article ">Figure 18
<p>Performance in noisy and jeopardized GPS signal conditions during outage #7.</p>
Full article ">Figure 19
<p>(Top) Performance of estimated pitch angle <span class="html-italic">versus</span> the reference for the Toronto trajectory. The bottom plot shows the difference between the two systems.</p>
Full article ">Figure 20
<p>(Top) Performance of estimated roll angle <span class="html-italic">versus</span> the reference for the Toronto trajectory. The bottom plot shows the difference between the two systems.</p>
Full article ">Figure 21
<p>Convergence of gyroscope bias during the Toronto downtown trajectory.</p>
Full article ">
743 KiB  
Article
A Lateral Differential Resonant Pressure Microsensor Based on SOI-Glass Wafer-Level Vacuum Packaging
by Bo Xie, Yonghao Xing, Yanshuang Wang, Jian Chen, Deyong Chen and Junbo Wang
Sensors 2015, 15(9), 24257-24268; https://doi.org/10.3390/s150924257 - 21 Sep 2015
Cited by 24 | Viewed by 6259
Abstract
This paper presents the fabrication and characterization of a resonant pressure microsensor based on SOI-glass wafer-level vacuum packaging. The SOI-based pressure microsensor consists of a pressure-sensitive diaphragm at the handle layer and two lateral resonators (electrostatic excitation and capacitive detection) on the device [...] Read more.
This paper presents the fabrication and characterization of a resonant pressure microsensor based on SOI-glass wafer-level vacuum packaging. The SOI-based pressure microsensor consists of a pressure-sensitive diaphragm at the handle layer and two lateral resonators (electrostatic excitation and capacitive detection) on the device layer in a differential setup. The resonators were vacuum packaged with a glass cap using anodic bonding, and the wire interconnection was realized using a mask-free electrochemical etching approach by selectively patterning an Au film on highly topographic surfaces. The fabricated resonant pressure microsensor with dual resonators was characterized in a systematic manner, producing a quality factor higher than 10,000 (stable over ~6 months), a sensitivity of about 166 Hz/kPa and a reduced nonlinear error of 0.033% F.S. With the differential output, the sensitivity was doubled and the temperature-induced frequency drift was reduced to 25% of its single-resonator value. Full article
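The benefit of the differential readout can be illustrated with a few lines of arithmetic. In this idealized sketch, pressure shifts the two resonant frequencies in opposite directions while temperature shifts both equally; the per-resonator sensitivity of 83 Hz/kPa and the drift value are assumed numbers chosen so the differential sensitivity matches the reported 166 Hz/kPa:

```python
# Idealized dual-resonator model: pressure moves the two resonant
# frequencies in opposite directions; temperature moves both alike.
S_P = 83.0     # per-resonator pressure sensitivity, Hz/kPa (assumed)
D_T = 2.0      # common-mode temperature drift, Hz/K (assumed)

def resonator_pair(p_kpa, d_temp):
    f1 = 30000.0 + S_P * p_kpa + D_T * d_temp
    f2 = 35000.0 - S_P * p_kpa + D_T * d_temp
    return f1, f2

f1, f2 = resonator_pair(10.0, 5.0)         # +10 kPa, +5 K
f1_ref, f2_ref = resonator_pair(0.0, 0.0)
diff = (f1 - f1_ref) - (f2 - f2_ref)       # differential output
# Differential sensitivity is 2*S_P = 166 Hz/kPa, and the common
# temperature term cancels exactly in this idealized model.
```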
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>(<b>a</b>) Top view of the proposed pressure micro sensor, including a patterned SOI wafer and a glass cap, which are bonded together as vacuum packaging; (<b>b</b>) Detailed view of the resonator, which consists of a doubly clamped resonant beam, a driving electrode, and a sensing electrode. The resonator works in the lateral mode which is excited based on electrostatic forces and detected based on capacitance; (<b>c</b>) Bottom view of the micro sensor where gold pads on the device layer are metalized for interconnection through via holes formed on the handle layer.</p>
Full article ">Figure 2
<p>Device fabrication flow charts: (<b>a</b>,<b>b</b>) Patterning the ZnO film based on e-beam evaporation and lift-off process; (<b>c</b>,<b>d</b>) Patterning photoresist on the ZnO film as a mask to form via holes; (<b>e</b>,<b>f</b>) Forming via holes and diaphragm using the patterned ZnO film as hard mask; (<b>g</b>–<b>j</b>) Fabricating resonators on the device layer based on photolithography and dry etching; (<b>k</b>) Removing the oxide layer in vapor HF to release resonators; (<b>l</b>) Vacuum packaging of the fabricated silicon wafer and the glass cap wafer using anodic bonding.</p>
Full article ">Figure 3
<p>(<b>a</b>) Lateral pull-in damage (collapses between the resonant beam and driving/sensing electrode) after the step of anodic bonding; (<b>b</b>) The deposition of an Al film on the handle layer before anodic bonding can effectively avoid the pull-in damage.</p>
Full article ">Figure 4
<p>The schematic of selective pad patterning in via holes based on the electrochemical dissolution of gold. (<b>a</b>) Cr/Au film deposition; (<b>b</b>) Electrochemical etching. (<b>c</b>) Gold portions in via holes were preserved.</p>
Full article ">Figure 5
<p>(<b>a</b>) A picture of the micro-fabricated devices; (<b>b</b>) A SEM picture of a via hole after wire bonding, where the gold portion on the handle layer was thoroughly removed with the gold portion left on the device layer for wire bonding.</p>
Full article ">Figure 6
<p>(<b>a</b>) The quality factor of the proposed pressure micro sensor was quantified as 11,396; (<b>b</b>) The quality factor was higher than 10,000 within six months of device fabrication, which confirms the reliability of the SOI-glass wafer-level vacuum packaging.</p>
Full article ">Figure 7
<p>(<b>a</b>) The schematic of the closed-loop circuit used as a self-oscillator for frequency detection; (<b>b</b>) The setup used to characterize the sensor performance as functions of both pressure and temperature.</p>
Full article ">Figure 8
<p>(<b>a</b>) The intrinsic resonant frequencies of these two resonators as a function of pressure, producing a sensitivity of about 166 Hz/kPa and a linearity with a coefficient of 0.9999998 based on the differential setup; (<b>b</b>) Two resonators demonstrated similar temperature drifts and the temperature caused frequency drift was reduced by the differential output.</p>
Full article ">Figure 9
<p>In response to temperature changes, the quantified errors using the differential frequency output were about four times smaller than the values obtained using the single resonator.</p>
Full article ">Figure 10
<p>(<b>a</b>) The quantified error of the developed micro sensor in a temperature range from −40 °C to 60 °C and a pressure range from 50 kPa to 110 kPa; (<b>b</b>) A comparison of the proposed pressure microsensor in this study with a quartz resonant pressure sensor (Fluke PPC4) in response to environmental pressure variations; (<b>c</b>) The performance of the developed pressure microsensor in response to regulated small pressure changes (+10 Pa, −5 Pa, +3 Pa, −2 Pa, +1 Pa, −1 Pa, +2 Pa).</p>
Figure 10">
Full article ">
716 KiB  
Article
Cell Selection Game for Densely-Deployed Sensor and Mobile Devices in 5G Networks Integrating Heterogeneous Cells and the Internet of Things
by Lusheng Wang, Yamei Wang, Zhizhong Ding and Xiumin Wang
Sensors 2015, 15(9), 24230-24256; https://doi.org/10.3390/s150924230 - 18 Sep 2015
Cited by 13 | Viewed by 6474
Abstract
With the rapid development of wireless networking technologies, the Internet of Things and heterogeneous cellular networks (HCNs) tend to be integrated to form a promising wireless network paradigm for 5G. Hyper-dense sensor and mobile devices will be deployed under the coverage of heterogeneous [...] Read more.
With the rapid development of wireless networking technologies, the Internet of Things and heterogeneous cellular networks (HCNs) tend to be integrated to form a promising wireless network paradigm for 5G. Hyper-dense sensor and mobile devices will be deployed under the coverage of heterogeneous cells, so that each of them could freely select any available cell covering it and compete for resource with others selecting the same cell, forming a cell selection (CS) game between these devices. Since different types of cells usually share the same portion of the spectrum, devices selecting overlapped cells can experience severe inter-cell interference (ICI). In this article, we study the CS game among a large number of densely-deployed sensor and mobile devices for their uplink transmissions in a two-tier HCN. ICI is embedded into the traditional congestion game (TCG), forming a congestion game with ICI (CGI) and a congestion game with capacity (CGC). For the three games above, we theoretically find the circular boundaries between the devices selecting the macrocell and those selecting the picocells, indicated by the pure strategy Nash equilibria (PSNE). Meanwhile, through a number of simulations with different picocell radii and different path loss exponents, the collapse of the PSNE impacted by severe ICI (i.e., a large number of picocell devices change their CS preferences to the macrocell) is profoundly revealed, and the collapse points are identified. Full article
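Best-response dynamics is a standard way to reach a PSNE in such congestion games: devices repeatedly switch to the cheaper cell until no one benefits from switching. A toy two-cell sketch (the cost functions are illustrative, not the CGI/CGC payoffs of the paper):

```python
def best_response_psne(n, cost):
    """Iterate best responses in a two-cell congestion game until no
    device gains by switching, i.e. a pure-strategy Nash equilibrium.
    cost(cell, load) is the per-device cost when `load` devices share `cell`."""
    choice = [0] * n                      # all devices start on the macrocell (0)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            loads = [choice.count(0), choice.count(1)]
            cur, alt = choice[i], 1 - choice[i]
            # switching adds device i to the other cell's load
            if cost(alt, loads[alt] + 1) < cost(cur, loads[cur]):
                choice[i] = alt
                changed = True
    return choice

# Toy costs: the picocell (cell 1) congests 2.5x faster than the macrocell
psne = best_response_psne(10, lambda cell, load: load * (2.5 if cell == 1 else 1.0))
```

Congestion games are potential games, so this iteration is guaranteed to terminate at a PSNE.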
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1
<p>Scenario with multiple picocells and densely-deployed devices in a single macrocell.</p>
Full article ">Figure 2
<p>System model. Each picocell holds a circular boundary (<span class="html-italic">i.e.</span>, the azure dashed circle) indicating the PSNE. Device A is inside the boundary, so it transmits to the picocell and interferes with the macrocell. Although Device B is within the edge of the picocell, it is outside of the dashed circle, so it selects the macrocell and interferes with the picocell.</p>
Full article ">Figure 3
<p>Two-dimensional case of the traditional congestion game (TCG) for the proof of Theorem 1.</p>
Full article ">Figure 4
<p>Monotonicity of payoff functions in Theorems 2 and 3. Obtained by setting <math display="inline"> <mrow> <msub> <mi>B</mi> <mi>i</mi> </msub> <mo>=</mo> <mn>2</mn> <mo>/</mo> <mn>3</mn> <msub> <mi>B</mi> <mn>0</mn> </msub> </mrow> </math>, <math display="inline"> <mrow> <msub> <mi>R</mi> <mi>i</mi> </msub> <mo>=</mo> <mn>300</mn> </mrow> </math> m, <math display="inline"> <mrow> <mi>w</mi> <mo>=</mo> <mn>3</mn> <mo>×</mo> <msup> <mn>10</mn> <mn>6</mn> </msup> </mrow> </math> and other parameters commonly configured as in <a href="#sec5-sensors-15-24230" class="html-sec">Section 5</a>, for a scenario with one picocell located within the macrocell. Curves are obtained by searching for the PSNE, so the number of devices selecting the picocell gradually decreases during iterations. (<b>a</b>) Cost in Theorem 2; (<b>b</b>) capacity in Theorem 3.</p>
Full article ">Figure 5
<p>Verification of the theorems. (<b>a</b>) Theorem 1; (<b>b</b>) Theorem 2; (<b>c</b>) Theorem 3.</p>
Full article ">Figure 6
<p>Impact of ICI on CS preference and spatial spectral efficiency using TCG and the congestion game with ICI (CGI). (<b>a</b>) Path Loss 2 (free space); (<b>b</b>) Path Loss 2 (free space); (<b>c</b>) Path Loss 3 (suburbs); (<b>d</b>) Path Loss 3 (suburbs); (<b>e</b>) Path Loss 4 (downtown); (<b>f</b>) Path Loss 4 (downtown).</p>
Full article ">Figure 7
<p>Impact of ICI on CS preference and spatial spectral efficiency using CGC-fractional frequency reuse (FFR) and CGC-orthogonal frequency division (OFD). (<b>a</b>) Path Loss 2 (free space); (<b>b</b>) Path Loss 2 (free space); (<b>c</b>) Path Loss 3 (suburbs); (<b>d</b>) Path Loss 3 (suburbs); (<b>e</b>) Path Loss 4 (downtown); (<b>f</b>) Path Loss 4 (downtown).</p>
Full article ">Figure 8
<p>The polar coordinate systems for calculating the ICI. (<b>a</b>) To calculate <span class="html-italic">x</span>; (<b>b</b>) to calculate <span class="html-italic">y</span>.</p>
Full article ">
3384 KiB  
Article
A Fiber Bragg Grating Sensing Based Triaxial Vibration Sensor
by Tianliang Li, Yuegang Tan, Yi Liu, Yongzhi Qu, Mingyao Liu and Zude Zhou
Sensors 2015, 15(9), 24214-24229; https://doi.org/10.3390/s150924214 - 18 Sep 2015
Cited by 27 | Viewed by 6855
Abstract
A fiber Bragg grating (FBG) sensing based triaxial vibration sensor has been presented in this paper. The optical fiber is directly employed as elastomer, and the triaxial vibration of a measured body can be obtained by two pairs of FBGs. A model of [...] Read more.
A fiber Bragg grating (FBG) sensing based triaxial vibration sensor is presented in this paper. The optical fiber is directly employed as the elastomer, and the triaxial vibration of a measured body can be obtained from two pairs of FBGs. A model of the triaxial vibration sensor, the decoupling principles for triaxial vibration and experimental analyses are presented. Experimental results show sensitivities of 86.9 pm/g, 971.8 pm/g and 154.7 pm/g in the three orthogonal sensing directions, with linearity errors of 3.64%, 1.50% and 3.01%, respectively. The flat frequency ranges are 20–200 Hz, 3–20 Hz and 4–50 Hz, and the resonant frequencies are 700 Hz, 40 Hz and 110 Hz in the x/y/z directions, respectively. When the sensor is excited in a single direction, the outputs of the sensor in the other two directions are consistent with the outputs in the non-working state. It is thus effectively demonstrated that the sensor can be used for three-dimensional vibration measurement. Full article
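The sum/difference decoupling suggested by the paper's calibration figures (ax from Δλ1 − Δλ2, ay from Δλ1 + Δλ2, az from Δλ3 + Δλ4) can be sketched directly, using the sensitivities quoted in the abstract; the pairing and the input shifts below are illustrative:

```python
# Sum/difference decoupling of the four FBG wavelength shifts (pm) into
# the three axis accelerations (g). The pairing follows the calibration
# figures; the pm/g sensitivities are the ones quoted in the abstract.
S_X, S_Y, S_Z = 86.9, 971.8, 154.7

def decouple(dl1, dl2, dl3, dl4):
    ax = (dl1 - dl2) / S_X
    ay = (dl1 + dl2) / S_Y
    az = (dl3 + dl4) / S_Z
    return ax, ay, az

# A pure x-axis excitation shifts FBG1 and FBG2 oppositely, so the
# y channel (their sum) cancels and only ax responds:
ax, ay, az = decouple(43.45, -43.45, 0.0, 0.0)
```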
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>Schematic diagram and photograph of an FBG based triaxial vibration sensor.</p>
Full article ">Figure 2
<p>Vibration model of optical fiber in axial direction.</p>
Full article ">Figure 3
<p>Vibration model of optical fiber in transverse direction.</p>
Full article ">Figure 4
<p>Schematic diagram and photograph of sensing characteristics experiment.</p>
Full article ">Figure 5
<p>Time domain signal of each FBG under <span class="html-italic">x</span>-direction excitation with the acceleration—10 m/s<sup>2</sup>-200 Hz.</p>
Full article ">Figure 6
<p>Response of the three directions under <span class="html-italic">x</span>-direction excitation with the acceleration—10 m/s<sup>2</sup>-200 Hz. (<b>a</b>) Time domain response map. (<b>b</b>) Spectrum map.</p>
Full article ">Figure 7
<p>Amplitude-frequency curve of the sensor for the <span class="html-italic">x</span>-vibration direction.</p>
Full article ">Figure 8
<p>Amplitude-frequency curves of each direction under <span class="html-italic">x</span>-direction excitation.</p>
Full article ">Figure 9
<p>Time domain signal of each FBG under <span class="html-italic">y</span>-direction excitation with the acceleration—1 m/s<sup>2</sup>-16 Hz.</p>
Full article ">Figure 10
<p>Response of the three directions under <span class="html-italic">y</span>-direction excitation with the acceleration—1 m/s<sup>2</sup>-16 Hz. (<b>a</b>) Time domain response map. (<b>b</b>) Spectrum map.</p>
Full article ">Figure 11
<p>Amplitude-frequency curves of each direction under <span class="html-italic">y</span>-direction excitation.</p>
Full article ">Figure 12
<p>Time domain signal of each FBG under <span class="html-italic">z</span>-direction excitation with the acceleration—1 m/s<sup>2</sup>-8 Hz.</p>
Full article ">Figure 13
<p>Response of the three directions under <span class="html-italic">z</span>-direction excitation with the acceleration—1 m/s<sup>2</sup>-8 Hz. (<b>a</b>) Time domain response map. (<b>b</b>) Spectrum map.</p>
Full article ">Figure 14
<p>Amplitude-frequency curves of each direction under <span class="html-italic">z</span>-direction excitation.</p>
Full article ">Figure 15
<p>Difference value Δ<span class="html-italic">λ</span><sub>1</sub> − Δ<span class="html-italic">λ</span><sub>2</sub> versus acceleration <span class="html-italic">a<sub>x</sub></span> under <span class="html-italic">x</span>-direction excitation with the acceleration frequency of 100 Hz. (<b>a</b>) Relation between Δ<span class="html-italic">λ</span><sub>1</sub> − Δ<span class="html-italic">λ</span><sub>2</sub> and acceleration <span class="html-italic">a<sub>x</sub></span> within 5–25 m/s<sup>2</sup>. (<b>b</b>) The linear fitting curve.</p>
Full article ">Figure 16
<p>Addition value Δ<span class="html-italic">λ</span><sub>1</sub> + Δ<span class="html-italic">λ</span><sub>2</sub> versus acceleration <span class="html-italic">a<sub>y</sub></span> under <span class="html-italic">y</span>-direction excitation with the acceleration frequency of 8 Hz. (<b>a</b>) Relation between Δ<span class="html-italic">λ</span><sub>1</sub> + Δ<span class="html-italic">λ</span><sub>2</sub> and acceleration <span class="html-italic">a<sub>y</sub></span> within 1–4.5 m/s<sup>2</sup>. (<b>b</b>) The linear fitting curve.</p>
Full article ">Figure 17
<p>Addition value Δ<span class="html-italic">λ</span><sub>3</sub> + Δ<span class="html-italic">λ</span><sub>4</sub> versus acceleration <span class="html-italic">a<sub>z</sub></span> under <span class="html-italic">z</span>-direction excitation with the acceleration frequency of 8 Hz. (<b>a</b>) Relation between Δλ<sub>3</sub> + Δλ<sub>4</sub> and acceleration <span class="html-italic">a<sub>z</sub></span> within 1–4.5 m/s<sup>2</sup>. (<b>b</b>) The linear fitting curve.</p>
Full article ">Figure 18
<p>Response of each FBG and three directions without excitation. (<b>a</b>) Time domain signal of each FBG without excitation. (<b>b</b>) Response of the three directions without excitation.</p>
Full article ">
4194 KiB  
Article
A Self-Adaptive Dynamic Recognition Model for Fatigue Driving Based on Multi-Source Information and Two Levels of Fusion
by Wei Sun, Xiaorui Zhang, Srinivas Peeta, Xiaozheng He, Yongfu Li and Senlai Zhu
Sensors 2015, 15(9), 24191-24213; https://doi.org/10.3390/s150924191 - 18 Sep 2015
Cited by 25 | Viewed by 5846
Abstract
To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, [...] Read more.
To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) into the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model incorporates the fatigue state from the previous time step into the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy for the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. Full article
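Decision-level fusion of BPAs is conventionally done with Dempster's rule of combination. A minimal two-source sketch over a {fatigue, awake} frame (the mass values are made-up illustrative numbers, not the paper's dynamic BPAs):

```python
from itertools import product

def dempster(m1, m2):
    """Combine two basic probability assignments (frozenset -> mass)
    with Dempster's rule, renormalizing away the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb           # mass falling on the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

F, A = frozenset({"fatigue"}), frozenset({"awake"})
TH = F | A                                # the full frame of discernment
m_eye  = {F: 0.6, A: 0.2, TH: 0.2}        # e.g. an eye-feature source
m_lane = {F: 0.5, A: 0.3, TH: 0.2}        # e.g. a driving-behavior source
fused = dempster(m_eye, m_lane)           # evidence for fatigue is reinforced
```

The paper's contribution is to make the source masses dynamic and to correct them under conflict; the combination step itself follows this rule.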
(This article belongs to the Section Physical Sensors)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Framework of the fatigue recognition model based on multi-source information and two levels of fusion.</p>
Full article ">Figure 2
<p>Structure of T-SFNN.</p>
Full article ">Figure 3
<p>Experiment route.</p>
Full article ">Figure 4
<p>Fatigue feature measurement. (<b>a</b>) F measurement; (<b>b</b>) ECD measurement; (<b>c</b>) MEOL measurement; (<b>d</b>) YF measurement; (<b>e</b>) PNS measurement; (<b>f</b>) SDSA measurement; (<b>g</b>) FADL measurement; (<b>h</b>) SDVS measurement.</p>
Figure 4">
Full article ">Figure 5
<p>MSE curves based on IPSO for T-SFNN-1 and T-SFNN-2.</p>
Full article ">
635 KiB  
Article
Continuous-Wave Stimulated Emission Depletion Microscope for Imaging Actin Cytoskeleton in Fixed and Live Cells
by Bhanu Neupane, Tao Jin, Liliana F. Mellor, Elizabeth G. Loboa, Frances S. Ligler and Gufeng Wang
Sensors 2015, 15(9), 24178-24190; https://doi.org/10.3390/s150924178 - 18 Sep 2015
Cited by 10 | Viewed by 6758
Abstract
Stimulated emission depletion (STED) microscopy provides a new opportunity to study fine sub-cellular structures and highly dynamic cellular processes, which are challenging to observe using conventional optical microscopy. Using actin as an example, we explored the feasibility of using a continuous wave (CW)-STED [...] Read more.
Stimulated emission depletion (STED) microscopy provides a new opportunity to study fine sub-cellular structures and highly dynamic cellular processes, which are challenging to observe using conventional optical microscopy. Using actin as an example, we explored the feasibility of using a continuous wave (CW)-STED microscope to study the fine structure and dynamics in fixed and live cells. Actin plays an important role in cellular processes, whose functioning involves dynamic formation and reorganization of fine structures of actin filaments. Frequently used confocal fluorescence and STED microscopy dyes were employed to image fixed PC-12 cells (stained with phalloidin-fluorescein isothiocyanate) and live rat chondrosarcoma cells (RCS) transfected with actin-green fluorescent protein (GFP). Compared to conventional confocal fluorescence microscopy, CW-STED microscopy shows improved spatial resolution in both fixed and live cells. We were able to monitor cell morphology changes continuously; however, the number of repetitive analyses was limited primarily by the dyes used in these experiments and could be improved with the use of dyes less susceptible to photobleaching. In conclusion, CW-STED may disclose new information for biological systems with a proper characteristic length scale. The challenges of using CW-STED microscopy to study cell structures are discussed. Full article
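The resolution gain of STED follows the square-root depletion law d ≈ λ/(2·NA·√(1 + I/Is)). A small sketch (the wavelength, numerical aperture and saturation ratio are illustrative values, not measurements from this work):

```python
import math

def sted_resolution(wavelength_nm, na, i_over_is):
    """Approximate STED lateral resolution: the diffraction limit
    shrunk by the square root of the depletion saturation factor."""
    return wavelength_nm / (2.0 * na * math.sqrt(1.0 + i_over_is))

d_confocal = sted_resolution(592.0, 1.4, 0.0)    # no depletion: ~211 nm
d_sted = sted_resolution(592.0, 1.4, 30.0)       # strong depletion: ~38 nm
```

The formula makes the trade-off explicit: finer resolution requires a higher depletion intensity I, which is exactly what accelerates the photobleaching discussed above.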
Show Figures

Graphical abstract
Full article ">Figure 1
<p>CW-STED microscopy. (<b>A</b>) Schematic of our home-built CW-STED microscope; (<b>B</b>) Confocal images of 45 nm FITC-doped polystyrene nanoparticles; (<b>C</b>) CW-STED image in the same area. The 2 µm scale bar applies to both images; (<b>D</b>) and (<b>E</b>) Representative cross-sections of the confocal and CW-STED images of the 45 nm particles.</p>
Full article ">Figure 2
<p>Confocal and CW-STED images of fixed PC-12 cells stained with phalloidin-FITC. (<b>A</b>) Confocal image of PC-12 cell with different sized neurites; (<b>B</b>) Expanded views of three selected neurites of different diameters; B and B’ refer to confocal and STED images, respectively; (<b>C</b>) A pair of confocal and STED images of another typical neurite showing an increase in geometric resolution for the STED image.</p>
Full article ">Figure 3
<p>Confocal and STED images of live rat chondrocyte cells transfected with green fluorescent protein. (<b>A</b>) Confocal image of chondrocyte cells; (<b>B</b>) Confocal image of selected cells; (<b>C</b>–<b>F</b>) STED images of the same cells at different times. The scale bar in (B) also applies to (C–F).</p>
Full article ">Figure 4
<p>Fluorescence intensity in CW-STED imaging. (<b>A</b>) Comparison of confocal and STED image intensity. The intensity was measured from the maximum intensity of five cell features with similar intensities in <a href="#sensors-15-24178-f003" class="html-fig">Figure 3</a>. The error bar is calculated from the standard deviation of the intensity of the five cell features; (<b>B</b>) Photobleaching of STED image upon multiple exposures to the depletion laser beam. Black curve: the average intensity of the whole image in <a href="#sensors-15-24178-f003" class="html-fig">Figure 3</a>. The other five curves: the maximum intensity of five selected cell features in <a href="#sensors-15-24178-f003" class="html-fig">Figure 3</a> at different exposures. The total image intensity decays smoothly while the cell feature intensity fluctuated slightly because of the movement of the live cell.</p>
Full article ">
888 KiB  
Article
Evaluation of Content-Matched Range Monitoring Queries over Moving Objects in Mobile Computing Environments
by HaRim Jung, MoonBae Song, Hee Yong Youn and Ung Mo Kim
Sensors 2015, 15(9), 24143-24177; https://doi.org/10.3390/s150924143 - 18 Sep 2015
Cited by 7 | Viewed by 5369
Abstract
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new [...] Read more.
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree), for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods. Full article
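The safe-region idea underlying such query indexes is simple: a moving object stays silent while it remains inside a server-assigned region and reports only on leaving it. A minimal sketch with a circular region (the circular shape and the numbers are illustrative; the GQR-tree computes richer regions and resident domains):

```python
import math

def needs_report(pos, center, radius):
    """True when the object has left its circular safe region and must
    send a location update to the server."""
    return math.hypot(pos[0] - center[0], pos[1] - center[1]) > radius

trace = [(0.0, 0.0), (1.0, 1.0), (4.0, 3.0)]   # sampled object positions
reports = [p for p in trace if needs_report(p, (0.0, 0.0), 2.0)]
# Only the last position (distance 5.0 > 2.0) triggers an uplink message.
```

Every suppressed update is a wireless message and a server evaluation saved, which is the cost model the simulations above measure.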
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1
<p>An example of the safe region and resident domain.</p>
Full article ">Figure 2
<p>System overview.</p>
Full article ">Figure 3
<p>Example of content-matched (CM) range monitoring queries.</p>
Full article ">Figure 4
<p>An example of query grouping.</p>
Full article ">Figure 5
<p>Classification of the overlap relationship. (<b>a</b>) <math display="inline"> <mrow> <msub> <mi>q</mi> <mn>1</mn> </msub> <mo>.</mo> <mi>R</mi> </mrow> </math> and <math display="inline"> <mrow> <msub> <mi>q</mi> <mn>2</mn> </msub> <mo>.</mo> <mi>R</mi> </mrow> </math> cover <span class="html-italic">N</span>; (<b>b</b>) <math display="inline"> <mrow> <msub> <mi>q</mi> <mn>1</mn> </msub> <mo>.</mo> <mi>R</mi> </mrow> </math> and <math display="inline"> <mrow> <msub> <mi>q</mi> <mn>2</mn> </msub> <mo>.</mo> <mi>R</mi> </mrow> </math> are covered by <span class="html-italic">N</span>; (<b>c</b>) <math display="inline"> <mrow> <msub> <mi>q</mi> <mn>1</mn> </msub> <mo>.</mo> <mi>R</mi> </mrow> </math> and <math display="inline"> <mrow> <msub> <mi>q</mi> <mn>2</mn> </msub> <mo>.</mo> <mi>R</mi> </mrow> </math> partially intersect <span class="html-italic">N</span>; (<b>d</b>) <math display="inline"> <mrow> <msub> <mi>q</mi> <mn>1</mn> </msub> <mo>.</mo> <mi>R</mi> </mrow> </math> and <math display="inline"> <mrow> <msub> <mi>q</mi> <mn>2</mn> </msub> <mo>.</mo> <mi>R</mi> </mrow> </math> equal <span class="html-italic">N</span>.</p>
Full article ">Figure 6
<p>An example of the sub-GQR-tree.</p>
Full article ">Figure 7
<p>An example of the Group-Aware Query Region (GQR)-tree.</p>
Full article ">Figure 8
<p>CPU-time <span class="html-italic">vs</span>. cardinalities of <span class="html-italic">Uniform</span> and <span class="html-italic">Skewed</span>. (<b>a</b>) <span class="html-italic">Uniform</span>; (<b>b</b>) <span class="html-italic">Skewed</span>.</p>
Full article ">Figure 9
<p>Total number of messages <span class="html-italic">vs</span>. cardinalities of <span class="html-italic">Uniform</span> and <span class="html-italic">Skewed</span>. (<b>a</b>) <span class="html-italic">Uniform</span>; (<b>b</b>) <span class="html-italic">Skewed</span>.</p>
Full article ">Figure 10
<p>CPU-time <span class="html-italic">vs</span>. size of spatial query ranges. (<b>a</b>) <span class="html-italic">Uniform</span>; (<b>b</b>) <span class="html-italic">Skewed</span>.</p>
Full article ">Figure 11
<p>Total number of messages <span class="html-italic">vs</span>. size of spatial query ranges. (<b>a</b>) <span class="html-italic">Uniform</span>; (<b>b</b>) <span class="html-italic">Skewed</span>.</p>
Full article ">Figure 12
<p>CPU-time <span class="html-italic">vs</span>. number of moving objects. (<b>a</b>) <span class="html-italic">Uniform</span>; (<b>b</b>) <span class="html-italic">Skewed</span>.</p>
Full article ">Figure 13
<p>Total number of messages <span class="html-italic">vs</span>. number of moving objects. (<b>a</b>) <span class="html-italic">Uniform</span>; (<b>b</b>) <span class="html-italic">Skewed</span>.</p>
Full article ">Figure 14
<p>CPU-time <span class="html-italic">vs</span>. number of non-spatial attributes. (<b>a</b>) <span class="html-italic">Uniform</span>; (<b>b</b>) <span class="html-italic">Skewed</span>.</p>
Full article ">Figure 15
<p>Total number of messages <span class="html-italic">vs</span>. number of non-spatial attributes. (<b>a</b>) <span class="html-italic">Uniform</span>; (<b>b</b>) <span class="html-italic">Skewed</span>.</p>
Full article ">
2003 KiB  
Article
Self-Adaptive Strategy Based on Fuzzy Control Systems for Improving Performance in Wireless Sensors Networks
by Vicente Hernández Díaz, José-Fernán Martínez, Néstor Lucas Martínez and Raúl M. Del Toro
Sensors 2015, 15(9), 24125-24142; https://doi.org/10.3390/s150924125 - 18 Sep 2015
Cited by 8 | Viewed by 6546
Abstract
The solutions to cope with the new challenges that societies face nowadays involve providing smarter daily systems. To achieve this, technology has to evolve and leverage automatic interactions among physical systems, with less human intervention. Technological paradigms like the Internet of Things (IoT) and Cyber-Physical Systems (CPS) provide reference models, architectures, approaches and tools that support cross-domain solutions. Thus, CPS-based solutions will be applied in different application domains like e-Health, Smart Grid, Smart Transportation and so on, to assure the expected response from a complex system that relies on the smooth interaction and cooperation of diverse networked physical systems. Wireless Sensor Networks (WSNs) are a well-known wireless technology that forms part of larger CPS. A WSN monitors a physical system or object (e.g., the environmental conditions of a cargo container) and relays the data to the targeted processing element. Reliable communication, as well as restrained energy consumption, are expected features of a WSN. This paper shows the results obtained in a real WSN deployment, based on SunSPOT nodes, which carries out a fuzzy-based control strategy to improve energy consumption while keeping communication reliability and computational resource usage within bounds. Full article
(This article belongs to the Special Issue Cyber-Physical Systems)
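The control idea in the abstract above (adapt each node's transmission power from its battery level and neighbor count) can be illustrated with a toy fuzzy rule evaluation. Everything below is invented for the sketch: the membership functions, the rule base and the ±2 dB output levels are not the paper's FDM1/FDM2 transfer functions.

```python
def tx_power_adjustment(battery_pct, n_neighbors):
    """Toy fuzzy controller: returns a transmission-power adjustment in dB.

    Memberships and output levels are hypothetical, chosen only to show the
    fuzzify -> rule evaluation -> defuzzify pipeline.
    """
    # Fuzzification with simple linear memberships.
    batt_low  = max(0.0, 1.0 - battery_pct / 50.0)   # fully "low" at 0%, fades out by 50%
    neigh_few = max(0.0, 1.0 - n_neighbors / 5.0)    # fully "few" at 0, fades out by 5

    # Rule base: few neighbors -> raise power (+2 dB) to keep connectivity;
    #            low battery   -> lower power (-2 dB) to save energy.
    increase, decrease = neigh_few, batt_low
    if increase + decrease == 0.0:
        return 0.0                                   # no rule fires: keep current power
    # Defuzzification: weighted average of the two rule outputs.
    return (2.0 * increase - 2.0 * decrease) / (increase + decrease)

print(tx_power_adjustment(100, 0))   # 2.0  (isolated node, full battery)
print(tx_power_adjustment(0, 10))    # -2.0 (dense neighborhood, empty battery)
print(tx_power_adjustment(100, 10))  # 0.0  (no rule fires)
```

The trade-off the paper targets is visible even in this caricature: connectivity pulls the power up, energy pulls it down, and the fuzzy combination arbitrates between them.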
Show Figures
Graphical abstract
Figure 1: Control system design for self-adapting the WSN nodes' transmission power considering the number of neighbors and the battery level.
Figure 2: Fuzzy transfer functions. (a) FDM1; (b) FDM2.
Figure 3: Software components accomplishing the self-adaptive system, improving energy consumption while keeping communication connectivity.
Figure 4: UML sequence diagram depicting the basic components' interaction.
Figure 5: Experiment deployment. (a) One of the sensor nodes used in the experiments; (b) deployment area.
Figure 6: Experiment deployment.
Figure 7: Evolution of the total charge per experiment, relative to the first value of each one.
Figure 8: Slope (β_e) comparison between experiments.
Figure 9: J_e comparison between experiments.
Figure 10: Evolution of the total transmission error rate per experiment.
Figure 11: J_c comparison between experiments for the total transmission error rate.
Figure 12: Round transmission error rate per experiment.
Figure 13: J_c comparison between experiments for the current transmission error rate per round.
1258 KiB  
Article
Inline Measurement of Particle Concentrations in Multicomponent Suspensions using Ultrasonic Sensor and Least Squares Support Vector Machines
by Xiaobin Zhan, Shulan Jiang, Yili Yang, Jian Liang, Tielin Shi and Xiwen Li
Sensors 2015, 15(9), 24109-24124; https://doi.org/10.3390/s150924109 - 18 Sep 2015
Cited by 14 | Viewed by 6163
Abstract
This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of the features. Secondly, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of nonlinear systems and provides a feasible way to monitor industrial processes. Full article
(This article belongs to the Section Physical Sensors)
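The LS-SVM regression model the abstract refers to reduces, in its standard dual form, to a single linear system. Below is a generic sketch with an RBF kernel; the paper's ultrasonic features, sample sets and the grid-searched values of γ and σ² are not reproduced, and the function names are ours.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma2):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma2))

def lssvm_fit(X, y, gamma, sigma2):
    """Solve the LS-SVM regression dual: one (n+1) x (n+1) linear system.

        [ 0   1^T         ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma ] [ alpha ] = [ y ]
    """
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma2) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, support values alpha

def lssvm_predict(X_new, X_train, b, alpha, sigma2):
    """f(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma2) @ alpha + b
```

With a large regularization weight γ the model nearly interpolates the training targets; the paper instead tunes γ and σ² by a two-step grid search on a validation error (its RMSEV contours).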
Show Figures
Figure 1: (a) Schematic diagram of the ultrasonic system for the measurement of suspension concentrations; (b) the full time-domain signal of pure water; (c) the windowed, averaged and zero-padded time-domain signal of (b); (d) the frequency spectrum of (c).
Figure 2: A flow chart of the whole process of modeling and inline measurement.
Figure 3: The experimental design of the samples: the black solid squares represent the training subset; the red hollow squares stand for the prediction subset; the blue triangles denote the inline test subset.
Figure 4: (a) The scatter plot of the principal components (PC1/PC2); (b) the scatter plot of the leverage and the studentized residual.
Figure 5: The importance of each feature.
Figure 6: The contour plot of RMSEV versus γ and σ² in the grid search for (a) c_k and (b) c_t. The grids "·" and "+" are 10 × 10 in the first and second steps, respectively. The colors stand for the value of RMSEV.
Figure 7: Averaged RMSEP of each feature subset against the number of PCs.
Figure 8: The predicted values vs. the actual values using (a) the optimal PLS model and (b) the optimal LS-SVM model.
Figure 9: The result of inline measurement: (a) the real-time measured concentration; (b) the XY error-bar graph of the averaged c_k and c_t over 5 min.
1036 KiB  
Article
Three-Component Decomposition Based on Stokes Vector for Compact Polarimetric SAR
by Hanning Wang, Zhimin Zhou, John Turnbull, Qian Song and Feng Qi
Sensors 2015, 15(9), 24087-24108; https://doi.org/10.3390/s150924087 - 18 Sep 2015
Cited by 11 | Viewed by 5411
Abstract
In this paper, a three-component decomposition algorithm is proposed for processing compact polarimetric SAR images. By using the correspondence between the covariance matrix and the Stokes vector, three-component scattering models for the CTLR and DCP modes are established. The explicit expression of the decomposition results is then derived by setting the contribution of volume scattering as a free parameter. The degree of depolarization is taken as the upper bound of the free parameter, under the constraint that the weighting factor of each scattering component must be nonnegative. Several methods are investigated to estimate the free parameter suitable for decomposition. The feasibility of this algorithm is validated by AIRSAR data over San Francisco and RADARSAT-2 data over Flevoland. Full article
(This article belongs to the Section Remote Sensors)
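The Stokes-vector quantities the decomposition is built on can be sketched directly from the two receive channels of a CTLR-mode acquisition. This is a generic formulation: sign conventions (notably for g3) vary in the literature, and the paper's three-component scattering models themselves are not reproduced here.

```python
import numpy as np

def stokes_from_ctlr(Eh, Ev):
    """Stokes vector from the H/V receive channels of a CTLR-mode acquisition.

    One common convention; not necessarily the exact one used in the paper.
    The averages stand in for the spatial (multi-look) averaging of the SAR data.
    """
    g0 = np.mean(np.abs(Eh) ** 2 + np.abs(Ev) ** 2)   # total power
    g1 = np.mean(np.abs(Eh) ** 2 - np.abs(Ev) ** 2)
    g2 = 2.0 * np.mean((Eh * np.conj(Ev)).real)
    g3 = -2.0 * np.mean((Eh * np.conj(Ev)).imag)
    return np.array([g0, g1, g2, g3])

def degree_of_polarization(g):
    """m = sqrt(g1^2 + g2^2 + g3^2) / g0; 1 - m is the degree of depolarization,
    which the paper uses as the upper bound on the volume-scattering parameter."""
    return np.sqrt(g[1] ** 2 + g[2] ** 2 + g[3] ** 2) / g[0]

# A single deterministic return is fully polarized (m = 1) ...
g_pol = stokes_from_ctlr(np.array([1.0 + 0j]), np.array([0.0 + 0j]))
# ... while averaging two orthogonal returns is fully depolarized (m = 0).
g_dep = stokes_from_ctlr(np.array([1.0 + 0j, 0.0 + 0j]),
                         np.array([0.0 + 0j, 1.0 + 0j]))
print(degree_of_polarization(g_pol), degree_of_polarization(g_dep))  # 1.0 0.0
```

The two limiting cases in the example bracket the free parameter's range: the more depolarized the pixel, the larger the admissible volume-scattering contribution.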
Show Figures
Figure 1: Three-component decomposition under fully polarimetric (FP) mode (the original data smoothed by a 7 × 7 pixel window): (a) pseudo-color image and (b) classification.
Figure 2: Three-component decomposition under circular transmission while linear reception (CTLR) mode when p = 1 (7 × 7 smoothing): (a) pseudo-color image and (b) classification.
Figure 3: Averaged conformity degree for the whole image (ADI) under different values of p.
Figure 4: Three-component decomposition under CTLR mode when p = 0.65 (7 × 7 smoothing): (a) pseudo-color image and (b) classification.
Figure 5: Three-component decomposition under CTLR mode after reconstruction (7 × 7 smoothing): (a) pseudo-color image and (b) classification.
Figure 6: Three-component decomposition using Cloude compact polarimetric (CP) decomposition (7 × 7 smoothing): (a) pseudo-color image and (b) classification.
Figure 7: Three-component decomposition using m−δ decomposition (7 × 7 smoothing): (a) pseudo-color image and (b) classification.
Figure 8: Three-component decomposition under FP mode (7 × 7 smoothing): (a) pseudo-color image and (b) classification.
Figure 9: Three-component decomposition under FP mode when p = 1 (7 × 7 smoothing): (a) pseudo-color image and (b) classification.
Figure 10: ADI under different values of p.
Figure 11: Three-component decomposition under CTLR mode when p = 0.65 (7 × 7 smoothing): (a) pseudo-color image and (b) classification.
Figure 12: Three-component decomposition after reconstruction (7 × 7 smoothing): (a) pseudo-color image; (b) classification.
Figure 13: Three-component decomposition using Cloude CP decomposition (7 × 7 smoothing): (a) pseudo-color image and (b) classification.
Figure 14: Three-component decomposition using m−δ decomposition (7 × 7 smoothing): (a) pseudo-color image and (b) classification.