
Sensors, Volume 19, Issue 24 (December-2 2019) – 246 articles

Cover Story (view full-size image): Wireless Sensor Networks (WSNs) are mostly used for environmental monitoring, military surveillance, healthcare and industrial applications. Multichannel communications in WSNs can enhance reliable transmission, alleviate hidden/exposed terminal problems, minimize network interference, and support parallel data communications. In a synchronous protocol, all nodes wake up at the same time, while in an asynchronous protocol, nodes have different wake-up times. Thus, the multichannel synchronous protocol is most suitable for real-time data transmission. In this work, a new channel access mechanism is designed to reduce power consumption during communications in any control channel, and performance analysis models are developed to evaluate the delay, throughput, reliability, and packet drop rate of the proposed MAC as compared to the DSME MAC of IEEE 802.15.4e. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for published papers, which are available in both HTML and PDF form. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
19 pages, 9553 KiB  
Article
Low-Voltage Low-Pass and Band-Pass Elliptic Filters Based on Log-Domain Approach Suitable for Biosensors
by Pipat Prommee, Natapong Wongprommoon, Montree Kumngern and Winai Jaikla
Sensors 2019, 19(24), 5581; https://doi.org/10.3390/s19245581 - 17 Dec 2019
Cited by 7 | Viewed by 5054
Abstract
This research proposes bipolar junction transistor (BJT)-based log-domain high-order elliptic ladder low-pass (LPF) and band-pass filters (BPF) using a lossless differentiator and lossless and lossy integrators. The log-domain lossless differentiator was realized by using seven BJTs and one grounded capacitor, the lossy integrator using five BJTs and one grounded capacitor, and the lossless integrator using seven BJTs and one grounded capacitor. The simplified signal flow graph (SFG) of the elliptic ladder LPF consisted of two lossy integrators, one lossless integrator, and one lossless differentiator, while that of the elliptic ladder BPF contained two lossy integrators, five lossless integrators, and one lossless differentiator. Log-domain cells were directly incorporated into the simplified SFGs. Simulations were carried out using PSpice with the HFA3127 transistor array. The proposed filters are operable in a low-voltage environment and are suitable for mobile equipment and further integration. The log-domain principle enables the frequency responses of the filters to be electronically tuned from 10 kHz to 10 MHz. The proposed filters are applicable to low-frequency biosensors by reconfiguring certain capacitors. The filters can efficiently remove low-frequency noise and random noise in the electrocardiogram (ECG) signal. Full article
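The electronic tunability mentioned above follows from the standard first-order log-domain (translinear) integrator relation f0 = I_B / (2πC·V_T); the capacitor value in the sketch below is illustrative and not taken from the paper.

```python
import numpy as np

V_T = 0.026          # thermal voltage at room temperature, ~26 mV
C = 100e-12          # illustrative grounded-capacitor value (100 pF), not from the paper

def cutoff_frequency(i_bias, cap=C, v_t=V_T):
    """First-order log-domain integrator pole: f0 = I_B / (2*pi*C*V_T)."""
    return i_bias / (2 * np.pi * cap * v_t)

for i_b in (1e-7, 1e-6, 1e-5, 1e-4):   # sweep the bias current over three decades
    print(f"I_B = {i_b:.0e} A  ->  f0 ~ {cutoff_frequency(i_b) / 1e3:.1f} kHz")
```

Sweeping the bias current over several decades moves the pole proportionally, which is the mechanism behind the kHz-to-MHz tuning range reported above.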
Show Figures

Graphical abstract
Figure 1. RLC elliptic ladder low-pass filter (LPF) prototype.
Figure 2. Signal flow graph of the RLC elliptic ladder LPF as shown in Figure 1.
Figure 3. RLC elliptic ladder band-pass filter (BPF) prototype.
Figure 4. Signal flow graph of the RLC elliptic ladder BPF.
Figure 5. Lossless differentiator realization.
Figure 6. (a) Basic log-domain cell, (b) complete circuit of log-domain lossless differentiator.
Figure 7. Log-domain lossy integrator and its block diagram.
Figure 8. Block diagram of lossless integrator.
Figure 9. Log-domain lossless integrator.
Figure 10. Normalized signal flow graph (SFG) of elliptic ladder LPF.
Figure 11. Simplified SFG of elliptic ladder LPF.
Figure 12. The proposed log-domain elliptic ladder LPF.
Figure 13. Normalized SFG of BPF.
Figure 14. Simplified SFG of BPF.
Figure 15. The proposed log-domain elliptic ladder BPF.
Figure 16. The small signal model of bipolar junction transistor (BJT).
Figure 17. Magnitude responses of lossless differentiator under varying I_B.
Figure 18. Magnitude responses of lossless integrator under varying I_B.
Figure 19. Magnitude responses of lossy integrator under varying I_B.
Figure 20. Comparison between magnitude responses of the proposed high-order elliptic ladder LPF and the RLC LPF prototype.
Figure 21. Magnitude responses of the proposed high-order elliptic ladder LPF under varying I_B.
Figure 22. Multi-tone testing of the proposed high-order elliptic ladder LPF.
Figure 23. Frequency response of proposed high-order elliptic ladder LPF for biosensor signal.
Figure 24. Noise output analysis of proposed high-order elliptic ladder LPF.
Figure 25. Reconstructed electrocardiogram (ECG) signal using proposed high-order elliptic ladder LPF.
Figure 26. Comparison between magnitude responses of the proposed high-order elliptic ladder BPF and the RLC BPF prototype.
Figure 27. Magnitude responses of the proposed high-order elliptic ladder BPF under varying I_B.
Figure 28. Multi-tone testing of the proposed high-order elliptic ladder BPF.
Figure 29. Frequency response of proposed high-order elliptic ladder BPF for biosensor signal.
Figure 30. Noise output analysis of proposed high-order elliptic ladder BPF.
Figure 31. Reconstructed ECG signal using proposed high-order elliptic ladder BPF.
8 pages, 2543 KiB  
Article
Measurement of Switching Performance of Pixelated Silicon Sensor Integrated with Field Effect Transistor
by Hyeyoung Lee, Jin-A Jeon, Jinyong Kim, Hyunsu Lee, Moo Hyun Lee, Manwoo Lee, Seungcheol Lee, Hwanbae Park and Sukjune Song
Sensors 2019, 19(24), 5580; https://doi.org/10.3390/s19245580 - 17 Dec 2019
Cited by 1 | Viewed by 2960
Abstract
Silicon shows very high detection efficiency for low-energy photons, and silicon pixel sensors provide high spatial resolution. Pixelated silicon sensors facilitate the direct detection of low-energy X-ray radiation. In this study, we developed junction field effect transistors (JFETs) that can be integrated into a pixelated silicon sensor to effectively handle the many signal readout channels arising from the pixelated structure, without any change in the sensor resolution. We focused on optimizing the JFET's switching function and simulated JFETs with different fabrication parameters. Furthermore, prototype JFET switches were designed and fabricated on the basis of the simulation results. It is important not only to keep the leakage currents in the JFET low, but also to reduce the current flow as much as possible by providing a high resistance when the JFET switch is off. We determined the optimal fabrication conditions for the effective switching of the JFETs. In this paper, we present the measured switching capability of the fabricated JFETs for various design variables and fabrication conditions. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1. (a) Schematic and (b) equivalent electronic circuit of a pixelated silicon sensor.
Figure 2. Two-dimensional simulation profile of a junction field effect transistor (JFET) in the pixelated silicon sensor.
Figure 3. I–V characteristics of a JFET for various gate voltages (V_g) in the simulation. The circular drawings show the JFET design parameters: A and B spaces.
Figure 4. Design of a JFET photomask for the pixelated silicon sensor.
Figure 5. Photograph of junction field effect transistors (JFETs) of various sizes in the fabricated pixelated silicon sensors.
Figure 6. I–V characteristics of the fabricated JFET with an area of 100 × 100 μm for various gate voltages (V_g). The inset shows the I–V characteristics of JFETs with different sizes but with the same design values of the A and B spaces.
Figure 7. Switching function results of the simulation and for the fabricated JFETs, based on the A and B spaces.
12 pages, 848 KiB  
Article
A Hybrid Lab-on-a-Chip Injector System for Autonomous Carbofuran Screening
by Aristeidis S. Tsagkaris, Jana Pulkrabova, Jana Hajslova and Daniel Filippini
Sensors 2019, 19(24), 5579; https://doi.org/10.3390/s19245579 - 17 Dec 2019
Cited by 20 | Viewed by 4729
Abstract
Securing food safety standards is crucial to protect the population from health-threatening food contaminants. In the case of pesticide residues, reference procedures typically find less than 1% of tested samples being contaminated, thus indicating the necessity for new tools able to support smart and affordable prescreening. Here, we introduce a hybrid paper–lab-on-a-chip platform, which integrates on-demand injectors to perform multiple step protocols in a single disposable device. Simultaneous detection of enzymatic color response in sample and reference cells, using a regular smartphone, enabled semiquantitative detection of carbofuran, a neurotoxic and EU-banned carbamate pesticide, in a wide concentration range. The resulting evaluation procedure is generic and allows the rejection of spurious measurements based on their dynamic responses, and was effectively applied for the binary detection of carbofuran in apple extracts. Full article
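The smartphone readout described above reduces, in essence, to tracking the color intensity of the sample and reference regions across the video frames. Below is a minimal OpenCV sketch under assumed ROI coordinates and a hypothetical file name (neither is from the paper).

```python
import cv2
import numpy as np

# Hypothetical ROIs as (x, y, width, height); real coordinates depend on the holder geometry.
ROIS = {"sample": (100, 200, 40, 40), "reference": (300, 200, 40, 40)}

def roi_blue_traces(video_path):
    """Return the mean blue-channel intensity of each ROI for every frame of the video."""
    cap = cv2.VideoCapture(video_path)
    traces = {name: [] for name in ROIS}
    while True:
        ok, frame = cap.read()              # frames are BGR, so channel 0 is blue
        if not ok:
            break
        for name, (x, y, w, h) in ROIS.items():
            patch = frame[y:y + h, x:x + w, 0]
            traces[name].append(float(np.mean(patch)))
    cap.release()
    return {name: np.asarray(vals) for name, vals in traces.items()}

# Example usage: normalize each trace by its initial value to follow the color development.
# traces = roi_blue_traces("uloc_run.mp4")
# response = traces["sample"] / traces["sample"][0] - traces["reference"] / traces["reference"][0]
```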
(This article belongs to the Special Issue Lab-on-a-Chip and Microfluidic Sensors)
Show Figures

Figure 1. (a) Architecture of the unibody-LOC (ULOC) device with four injectors and the functional paper membrane containing the colorimetric assay. (b) Ellman's assay detection principle implemented on the paper strip, which also acts as porous barrier for the hybrid injector. (c) A ULOC device in the holder with the screen showing the acquired image and a depiction of the device and the holder construction. Note: AChE = acetylcholinesterase; DTNB = 5,5′-dithio bis-2-nitrobenzoic acid.
Figure 2. Data processing illustration of an acquired video: (a) one frame of the video acquisition showing the identification of regions of interest (ROIs) and the collection of the blue camera intensity across 20,000 frames acquired at 30 fps; (b) average intensity signal of each ROI, the discounted t0 data, and indication of incubation time; (c) aligned signals indicating the response (yellow hue).
Figure 3. (a) Curve fit responses to concentrations between 0.010 mg L−1 and 5.0 mg L−1 of carbofuran in phosphate buffer saline (PBS); (b) model representing the observed response behavior and acceptance band for each semiquantitative category; (c) projection of acceptance bands and experimental response showing automatic categorization ranges and rejection of a measurement outside valid ranges (0.100 mg L−1 in this case). Note: in all cases, green font corresponds to 5.0 mg L−1, blue font to 1.0 mg L−1, magenta font to 0.500 mg L−1, yellow font to 0.100 mg L−1, black font to 0.050 mg L−1 and red font to 0.010 mg L−1. PC1 = first principal component; PC2 = second principal component.
Figure 4. (a) Model representing the observed response behavior and acceptance band for the binary discrimination of carbofuran in apple extracts. (b) Principal component analysis (PCA) scores for blank (n = 4) and contaminated apple extracts (n = 4), indicating the effective separation between the two groups.
26 pages, 17904 KiB  
Article
Analysis and Comparison of GPS Precipitable Water Estimates between Two Nearby Stations on Tahiti Island
by Fangzhao Zhang, Jean-Pierre Barriot, Guochang Xu and Marania Hopuare
Sensors 2019, 19(24), 5578; https://doi.org/10.3390/s19245578 - 17 Dec 2019
Cited by 4 | Viewed by 3038
Abstract
Since Bevis first proposed Global Positioning System (GPS) meteorology in 1992, precipitable water (PW) estimates retrieved with high accuracy from Global Navigation Satellite System (GNSS) networks have been widely used in many meteorological applications. The proper estimation of GNSS PW can be affected by the GNSS processing strategy as well as the local geographical properties of the GNSS sites. To better understand the impact of these factors, we compare PW estimates from two nearby permanent GPS stations (THTI and FAA1) on the tropical island of Tahiti, a basalt shield volcano located in the South Pacific, with a mean slope of 8% and a diameter of 30 km. The altitude difference between the two stations is 86.14 m, and their horizontal distance is 2.56 km. In this paper, Bernese GNSS Software Version 5.2 with precise point positioning (PPP) and the Vienna mapping function 1 (VMF1) was applied to estimate the zenith tropospheric delay (ZTD), which was compared with the International GNSS Service (IGS) Final products. Meteorological parameters sourced from the European Center for Medium-Range Weather Forecasts (ECMWF) and a local weighted mean temperature (T_m) model were used to estimate the GPS PW over three years (May 2016 to April 2019). The results show that the difference in PW between the two nearby GPS stations is nearly constant, with a value of 1.73 mm. In our case, this difference is mainly driven by insolation differences, with the difference in altitude and the wind being only secondary factors. Full article
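For readers unfamiliar with the retrieval chain, the conversion from ZTD to PW follows the standard GNSS-meteorology formulas: a Saastamoinen-type zenith hydrostatic delay (ZHD) from surface pressure, ZWD = ZTD − ZHD, and PW = Π(T_m)·ZWD. The sketch below uses commonly quoted refractivity constants and illustrative input values; it is not the authors' processing code.

```python
import numpy as np

def zhd_saastamoinen(p_hpa, lat_deg, h_m):
    """Zenith hydrostatic delay (m) from surface pressure (hPa), latitude and height."""
    return 0.0022768 * p_hpa / (1.0 - 0.00266 * np.cos(2 * np.radians(lat_deg))
                                - 0.00028 * h_m / 1000.0)

def pw_from_ztd(ztd_m, p_hpa, tm_k, lat_deg, h_m):
    """Precipitable water (mm) from ZTD via ZWD = ZTD - ZHD and PW = Pi * ZWD."""
    zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_deg, h_m)
    k2_prime = 0.221            # K/Pa  (22.1 K/hPa), Bevis-type refractivity constant
    k3 = 3.739e3                # K^2/Pa (3.739e5 K^2/hPa)
    rho_w, r_v = 1000.0, 461.5  # water density (kg/m^3), water-vapor gas constant (J/(kg K))
    pi_factor = 1.0e6 / (rho_w * r_v * (k3 / tm_k + k2_prime))   # ~0.16 in the tropics
    return pi_factor * zwd * 1000.0                              # meters -> millimeters

# Illustrative numbers only (not values from the paper):
print(pw_from_ztd(ztd_m=2.55, p_hpa=1010.0, tm_k=285.0, lat_deg=-17.58, h_m=98.0))
```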
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1. Location of the two nearby GPS stations (THTI (149.61° W, 17.58° S, ellipsoidal altitude 98.49 m) and FAA1 (149.62° W, 17.56° S, ellipsoidal altitude 12.35 m)) in Tahiti Island. (a) Tahiti Nui shield volcano (30 km in diameter) and Tahiti Iti volcano (15 km in diameter), joined by the isthmus of Taravao. Tahiti is located in the South Pacific, at mid-distance from South America to Australia. The contour lines of the topography are every 200 m, with the limits of hydrographic units, mostly radial valleys, indicated by bold lines. (b) Enlargement of (a), near the two stations, with contour lines of the topography every 5 m. The airstrip is clearly visible on the enlargement, close to the FAA1 station. The calderas (volcano pits) are indicated in the figures as well as hydrological units (watersheds).
Figure 2. Comparisons of our THTI ZTD values (light-green dots) with IGS THTI ZTD (red dots), and monthly averaged estimates of our THTI ZTD (green dots) and IGS THTI ZTD (magenta dots) (a), and our FAA1 ZTD (light-green dots) with IGS FAA1 ZTD (red dots), and monthly averaged estimates of our FAA1 ZTD (green dots) and IGS FAA1 ZTD (magenta dots) (c), and the histograms of the corresponding ZTD differences between GPS THTI ZTD and IGS THTI ZTD (b) and between GPS FAA1 ZTD and IGS FAA1 ZTD (d).
Figure 3. Comparisons between our THTI ZTD estimates (light-green dots) and our FAA1 ZTD estimates (red dots), and monthly averaged estimates of the THTI ZTD (green dots) and FAA1 ZTD (magenta dots) (a), and IGS THTI ZTD estimates (light-green dots) versus IGS FAA1 ZTD estimates (red dots), and monthly averaged estimates of the IGS THTI ZTD (green dots) and IGS FAA1 ZTD (magenta dots) (c), and the corresponding ZTD differences between THTI ZTDs and FAA1 ZTDs (b) and between IGS THTI ZTDs and IGS FAA1 ZTDs (d).
Figure 4. Comparisons of ECMWF pressure (light-green dots) with local pressure (red dots), and 10-day averaged estimates of the ECMWF pressure (green dots) and local pressure (magenta dots) (a), and ECMWF temperature (light-green dots) with local temperature (red dots), and 10-day averaged estimates of the ECMWF temperature (green dots) and local temperature (magenta dots) (b), and the time series of ZHD derived from ECMWF pressure (light-green dots) and local pressure (red dots), and 10-day averaged estimates of the ECMWF ZHD (green dots) and local ZHD (magenta dots) (c), and the histogram of the ZHD difference (d) during the whole year of 2018. The temporal resolution is one hour.
Figure 5. Comparisons of THTI (light-green dots) and FAA1 (red dots) ECMWF pressures, and monthly averaged estimates of the THTI (green dots) and FAA1 (magenta dots) ECMWF pressures (a) and temperatures (c), and the pressure difference from the standard model (−10.30 hPa, green dots) and the corresponding pressure difference from ECMWF (b), and the temperature difference from the standard model (−0.56 K, green dots) and the corresponding temperature difference from ECMWF (d). The temporal resolution is six hours.
Figure 6. Comparisons of THTI ZWD (light-green dots) with FAA1 ZWD (red dots), and monthly averaged estimates of the THTI ZWD (green dots) and FAA1 ZWD (magenta dots) (a) and the ZWD difference between them (b). The temporal resolution is one hour.
Figure 7. Comparisons of THTI PW values (magenta dots) based on the all-seasons T_m model with THTI Dry PW (cyan dots) and THTI Wet PW (light-green dots) based on the corresponding dry and wet season T_m models, and monthly averaged estimates of the THTI PW (red dots), THTI Dry PW (blue dots) and THTI Wet PW (green dots) (a) and their differences (b), and comparisons of FAA1 PW (magenta dots) based on the all-seasons T_m model with FAA1 Dry PW (cyan dots) and FAA1 Wet PW (light-green dots) based on the corresponding dry and wet season T_m models, and monthly averaged estimates of the FAA1 PW (red dots), FAA1 Dry PW (blue dots) and FAA1 Wet PW (green dots) (c) and their differences (d).
Figure 8. Comparisons of THTI PW (light-green dots) with FAA1 PW estimates (magenta dots) based on the all-seasons T_m model, and monthly averaged estimates of the THTI PW (green dots) and FAA1 PW (red dots) (a) and their differences (b), and based on seasonal T_m models (c) and their differences (d).
Figure 9. QQ plots of THTI ZWD (a) and THTI PW (b) with the normal law, FAA1 ZWD (c) and FAA1 PW (d) with the normal law, and QQ plots of cross-comparisons between THTI and FAA1: their ZWD difference with the normal law (e) and their PW difference with the normal law (f), their ZWD difference with THTI ZWD (g) and their PW difference with THTI PW (h), and their ZWD differences with FAA1 ZWD (i) and their PW differences with FAA1 PW (j). The red lines are linear quartile–quartile estimates of the fit to be expected if the two distributions are linearly related.
Figure 10. The power spectra of THTI ZWD (blue curve), FAA1 ZWD (red curve) and their differences (green curve) (a), and (b) the power spectrum of the ZWD differences (enlarged green curve of (a)); the black line is the spectrum of a white noise matching the data noise. The power spectra of THTI PW (blue curve), FAA1 PW (red curve) and their differences (green curve) (c), and (d) the power spectra of the PW differences (enlarged green curve of (c)); the black line is the spectrum of white noise. Sub-diurnal variations cannot be retrieved, as they are buried in the noise.
Figure 11. Comparisons of surface temperature from NOAA (red dots) and ECMWF (light-green dots), and 10-day averaged temperature estimates from NOAA (magenta dots) and ECMWF (green dots) (a), and the comparison of local THTI (light-green dots) temperature and NOAA FAA1 (red dots) temperature, and 10-day averaged temperature estimates of local THTI (green dots) and NOAA FAA1 (magenta dots) (b). The comparison is done for the whole year of 2018.
Figure 12. Comparisons of PW differences from GPS (FAA1−THTI, light-green dots) and their monthly averaged values (green dots) and exponentially derived PW (PWE) estimates (Equation (15)), with n_s values for THTI (blue dots) and FAA1 (red dots) (a), and the respective fits of GPS-PW differences based on Equation (15) (b). The black line corresponds to the one-to-one relationship between the GPS-PW differences and the PWE differences.
Figure 13. Variations of the averaged ZWD values of THTI (red dots) and FAA1 (blue dots) and the wind velocity values from NOAA (green dots) and RS (cyan dots) (a), and for PW values (c), and the variations of the corresponding averaged ZWD differences (red dots) between the two stations with the wind velocity from NOAA (green dots) and RS (cyan dots) (b), and for PW differences (d).
Figure 14. Ten-meter wind rose for 2017 and 2018 at the FAA1 station: (a) during the daytime from 8:00 to 17:00, and (b) during the nighttime from 20:00 to 05:00.
Figure 15. Histogram of 10 m wind direction at FAA1 for 2017–2018 (a) for 0 to 2 m/s, (b) for 2 to 4 m/s, and (c) for 4 to 10 m/s. The north-east direction is indicated by "NE" and spans 0° to 90°, the south-east direction by "SE" (90° to 180°), the south-west direction by "SW" (180° to 270°), and the north-west direction by "NW" (270° to 360°). The corresponding wind rose is shown in Figure 14.
Figure 16. Hourly insolation variation from the pyranometer collocated with the FAA1 station, showing the strong annual signature driven by the Sun's sky trajectory. The PW data relative to the FAA1 station (see Section 6.7) are shown as red dots. A time shift of about two months between the two series is clearly visible, probably linked to the thermal inertia of the soil and/or a time delay in the vegetation response (evapotranspiration) to the insolation.
Figure 17. (a) Time series, over the three years of this study, of the ratio of PW values, defined as (PW(THTI)−PW(FAA1))/PW(FAA1). The mean value of this ratio is −0.0355 ± 0.0002. (b) Fourier analysis of the time series of subfigure (a). The annual component is largely attenuated with respect to the annual component in the time series of (PW(THTI)−PW(FAA1)), as seen in Figure 10d, pointing to a common factor with annual periodicity in the original time series of PW values for the two sites.
17 pages, 7101 KiB  
Article
Pedestrian Dead Reckoning-Assisted Visual Inertial Odometry Integrity Monitoring
by Yuqin Wang, Ao Peng, Zhichao Lin, Lingxiang Zheng and Huiru Zheng
Sensors 2019, 19(24), 5577; https://doi.org/10.3390/s19245577 - 17 Dec 2019
Cited by 5 | Viewed by 4734
Abstract
Visual inertial odometers (VIOs) have received increasing attention in the area of indoor positioning due to the universality and convenience of the camera. However, the visual observations of a VIO are susceptible to the environment, and observation errors affect the final positioning accuracy. To address this issue, we analyzed the causes of visual observation error under different scenarios and their impact on positioning accuracy. We propose a new method that uses the short-time reliability of pedestrian dead reckoning (PDR) to aid visual integrity monitoring and to reduce positioning error. The proposed method selects the optimal positioning result by automatically switching between the outputs of VIO and PDR. Experiments were carried out to test and evaluate the proposed PDR-assisted visual integrity monitoring. The sensor suite of the experiments consisted of a stereo camera and an inertial measurement unit (IMU). Results were analyzed in detail and indicated that the proposed system performs better for indoor positioning in environments with low illumination, little background texture information, or a few moving objects. Full article
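The switching rule itself is simple: if the VIO position drifts outside the short-term error bound of the PDR solution, the PDR output is used instead (see Figure 3 below). A minimal sketch follows, where the bound ε is expressed as an assumed fraction of the step length rather than the thresholds used in the paper.

```python
import numpy as np

def fused_position(p_vio, p_pdr, step_length, k=0.5):
    """
    Select between VIO and PDR outputs for one step.
    p_vio, p_pdr: 2D/3D position estimates (numpy arrays) for the current step.
    step_length: PDR step-length estimate; epsilon is taken as a fraction k of it,
    which is an illustrative bound, not the threshold used in the paper.
    """
    epsilon = k * step_length
    if np.linalg.norm(p_vio - p_pdr) > epsilon:
        return p_pdr     # visual observation deemed unreliable -> trust short-term PDR
    return p_vio         # otherwise keep the (usually more accurate) VIO output

# Example usage with toy numbers:
print(fused_position(np.array([3.0, 1.2]), np.array([2.4, 1.1]), step_length=0.7))
```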
(This article belongs to the Section Internet of Things)
Show Figures

Figure 1. The full pipeline of the visual inertial odometer.
Figure 2. The error diagram of the geometric relationship of the feature points. The accuracy of the position estimation depends on the error in the pose estimation and the geometric angle of the observation; the shaded part indicates the uncertainty of the position estimation.
Figure 3. Pedestrian dead reckoning (PDR)-assisted visual integrity testing. ε is the error range of PDR positioning. When the positioning result of the VIO system exceeds the error range of the PDR, the PDR result is used instead of the VIO result.
Figure 4. The device used for the indoor experiment. It contains one stereo camera (ZED, 30 Hz) with a 672 × 376 resolution.
Figure 5. (a) The red line is the number of feature points with the threshold set to 20 per frame, and the green line is the number of sparse features with the threshold set to 60 per frame. (b) The red cross marks the starting point, point A is the endpoint of the track in the original state of the feature points, and point B is the endpoint of the track when feature points were sparse.
Figure 6. (a) The average gray value of the top image is 97.0845 and that of the bottom image is 183.946; the number of features extracted from the top image is 946 and from the bottom image is 1543. (b) The long line pointed to by the red arrow shows that the different lighting of the left and right cameras resulted in no feature points and serious deviations in the trajectory.
Figure 7. (a) The feature point distribution is controlled in the red area. The yellow circles represent all the extracted feature points, and the blue line segments represent the tracking tracks of the feature points. (b) The red cross marks the starting point, point C is the endpoint of the track where the feature point distribution was normal, and point D is the track end where the feature points were unevenly distributed.
Figure 8. (a) Open circles represent moving feature points; solid circles represent stationary feature points. (b) The red cross marks the starting point and point E is the original track. Point F is the endpoint of the trajectory affected by the moving feature points.
Figure 9. (a) The curve of the update frequency of visual observations; (b) the estimation of the gyroscope bias.
Figure 10. (a) Comparison of the trajectories of the Multi-State Constraint Kalman Filter (MSCKF) and MSCKF+PDR. The green color is the MSCKF and the blue color is the MSCKF+PDR. The yellow area is an enlarged view of the path of the stair area, where the red part is the PDR-assisted output path. (b) The picture of the stair scene; the yellow points are the extracted feature points.
Figure 11. (a) Comparison of the trajectories of MSCKF and MSCKF+PDR. (b) The pedestrian walking; the yellow points are the extracted feature points, and the green is the tracking path of the feature points.
Figure 12. (a) Comparison of the trajectories of MSCKF and MSCKF+PDR. (b) The picture of the stair scene; the yellow points are the extracted feature points.
Figure 13. (a) Comparison of the positioning error of MSCKF, PDR, and our system. (b) CDF of the position result of MSCKF, PDR, and our system.
Figure 14. The map had four floors. Green dots represent real landmark points that were calibrated in advance. The blue route is the positioning result of our system.
Figure 15. The positioning results of each floor are displayed. The blue track is the positioning result of our system, and the green points are the real landmark points. The red line indicates the error distance between the actual landmark and the system anchor point.
Figure 16. The CDF of the position result of our system.
13 pages, 2255 KiB  
Article
Enhanced Performance of Reagent-Less Carbon Nanodots Based Enzyme Electrochemical Biosensors
by Iria Bravo, Cristina Gutiérrez-Sánchez, Tania García-Mendiola, Mónica Revenga-Parra, Félix Pariente and Encarnación Lorenzo
Sensors 2019, 19(24), 5576; https://doi.org/10.3390/s19245576 - 17 Dec 2019
Cited by 11 | Viewed by 3585
Abstract
This work reports on the advantages of using carbon nanodots (CNDs) in the development of reagent-less oxidoreductase-based biosensors. The biosensor responses are based on the detection of H2O2, generated in the enzymatic reaction, at 0.4 V. A simple and fast method, consisting of the direct adsorption of the bioconjugate formed by mixing lactate oxidase, glucose oxidase, or uricase with CNDs, is employed to develop the nanostructured biosensors. CNDs enriched with peripheral amide groups are prepared from ethyleneglycol bis-(2-aminoethyl ether)-N,N,N′,N′-tetraacetic acid and tris(hydroxymethyl)aminomethane, used as precursors. The bioconjugate formed between lactate oxidase and CNDs was chosen as a case study to determine the analytical parameters of the resulting L-lactate biosensor. A linear concentration range of 3.0 to 500 µM, a sensitivity of 4.98 × 10−3 µA·µM−1, and a detection limit of 0.9 µM were obtained for the L-lactate biosensing platform. The reproducibility of the biosensor was found to be 8.6%. The biosensor was applied to L-lactate quantification in a commercial human serum sample using the standard addition method. An L-lactate concentration of 0.9 ± 0.3 mM (n = 3) was calculated in the serum extract. This result agrees well with the value of 0.9 ± 0.2 mM obtained using a commercial spectrophotometric enzymatic kit. Full article
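Since the standard addition method is used for the serum sample, the unknown concentration follows from the x-intercept of a linear fit of current versus added analyte. A minimal sketch with hypothetical numbers (not the paper's data):

```python
import numpy as np

# Hypothetical standard-addition data: added L-lactate (µM) and measured current (µA).
added = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
current = np.array([0.32, 0.57, 0.82, 1.07, 1.32])   # illustrative, not measured values

slope, intercept = np.polyfit(added, current, 1)      # linear fit: i = slope*(C_x + C_added)
c_unknown = intercept / slope                         # µM already present in the diluted sample
print(f"sensitivity ~ {slope * 1e3:.2f} nA/µM, unknown ~ {c_unknown:.0f} µM "
      "before dilution correction")
```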
(This article belongs to the Special Issue Electrochemical Nanobiosensors)
Show Figures

Figure 1. Cyclic voltammograms of SPAuE (black), Enz/SPAuE (red), and Enz–CNDs/SPAuE (blue) when (A) LOx, (B) GOx, and (C) UOx were used, in the presence of 0.5 mM L-lactate, 1.0 mM glucose, and 1.0 mM uric acid (A, B, and C, respectively), and Enz–CNDs/SPAuE in the absence of substrate (grey). (D) Cyclic voltammograms of CNDs/SPAuE in 0.1 M phosphate buffer (pH 7.0) in the absence (black) and presence (red) of 0.5 mM H2O2. Scan rate: 0.01 V/s.
Figure 2. UV-visible absorption spectra of (A) LOx and (C) the bioconjugate LOx–CNDs, and fluorescence emission spectra (λ_ex = 350 nm) of (B) LOx and (D) the bioconjugate LOx–CNDs (solid) and CNDs (dotted), in the absence (black) and in the presence (red) of 0.5 mM L-lactate in 0.1 M phosphate buffer (pH 7.0). Spectra were taken 30 minutes after L-lactate addition.
Figure 3. Fluorescence micrographs of (A) CNDs, (B) the LOx–CNDs bioconjugate, and (C) LOx, at 20× magnification.
Figure 4. Tapping-mode AFM topographic images of (A) the LOx–CNDs bioconjugate and (B) LOx. The inset in (B) shows a photograph of a dimer formed by the LOx. All the experiments were carried out on a gold plate.
Figure 5. Nyquist diagrams in 0.1 M phosphate buffer (pH 7.0) containing 10 mM K3Fe(CN)6/10 mM K4Fe(CN)6 for SPAuE (□), LOx/SPAuE (∆) and LOx–CNDs/SPAuE (○). Blue lines correspond to the fitting of the experimental data to the shown equivalent circuit.
Figure 6. Chronoamperometric response of the biosensor constructed from the bioconjugate LOx–CNDs in the presence of increasing L-lactate concentrations in 0.1 M phosphate buffer (pH 7.0). Inset: calibration curve.
Scheme 1. Enzymatic process for the bioconjugate LOx–CNDs.
20 pages, 870 KiB  
Article
Joint Optimization of Transmit Waveform and Receive Filter with Pulse-to-Pulse Waveform Variations for MIMO GMTI
by Zhoudan Lv, Feng He, Zaoyu Sun and Zhen Dong
Sensors 2019, 19(24), 5575; https://doi.org/10.3390/s19245575 - 17 Dec 2019
Viewed by 3152
Abstract
Multi-input multi-output (MIMO) radar is usually defined as a radar system whose transmitted and received signals can be separated, in time, space, or a transform domain, into multiple independent signals. Given the bandwidth and power constraints of the radar system, MIMO radar can improve its performance by jointly designing transmit waveforms and receive filters, so as to achieve better clutter and noise suppression. In this paper, we cyclically optimize the transmit waveform and the receive filters so as to maximize the output signal-to-interference-plus-noise ratio (SINR). Moving from a fixed pulse-to-pulse waveform to pulse-to-pulse waveform variations, we discuss the joint optimization under an energy constraint and then extend it to optimizations under constant-envelope and similarity constraints. Compared to optimization with a fixed pulse-to-pulse waveform, the generalized optimization achieves a higher output SINR and a lower minimum detectable velocity (MDV), further improving the suppression performance. Full article
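The cyclic (alternating) scheme can be illustrated on a toy space-time model: for a fixed waveform the receive filter has a closed-form minimum-variance solution, and for a fixed filter the energy-constrained waveform update is a dominant generalized eigenvector. The sketch below uses a simplified rank-one clutter-patch model and only the energy constraint; it is not the authors' exact signal model or algorithm.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
N, L, K = 4, 8, 30            # spatial/Doppler channels, code length, clutter patches (toy sizes)
sigma_n2, E = 1.0, float(L)   # noise power and transmit energy budget

def steering(theta):
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

a_t = steering(0.3)                                   # assumed target steering vector
a_c = [steering(th) for th in rng.uniform(-1.2, 1.2, K)]
p_c = rng.uniform(5.0, 20.0, K)                       # assumed clutter patch powers

B_t = np.kron(a_t[:, None], np.eye(L))                # target signature is B_t @ s
B_c = [np.kron(a[:, None], np.eye(L)) for a in a_c]

s = np.ones(L, dtype=complex) * np.sqrt(E / L)        # initial waveform (uniform code)
for _ in range(20):
    # Receive-filter step: minimum-variance filter for the current waveform.
    R = sigma_n2 * np.eye(N * L) + sum(
        p * np.outer(Bk @ s, (Bk @ s).conj()) for p, Bk in zip(p_c, B_c))
    x = B_t @ s
    w = np.linalg.solve(R, x)
    sinr = np.vdot(x, w).real                         # equals x^H R^{-1} x for this filter
    # Waveform step: maximize a ratio of quadratic forms under the energy constraint,
    # i.e., take the dominant generalized eigenvector and rescale to energy E.
    u = B_t.conj().T @ w
    num = np.outer(u, u.conj())
    den = (sigma_n2 * np.vdot(w, w).real / E) * np.eye(L) + sum(
        p * np.outer(Bk.conj().T @ w, (Bk.conj().T @ w).conj()) for p, Bk in zip(p_c, B_c))
    s = eigh(num, den)[1][:, -1]
    s *= np.sqrt(E) / np.linalg.norm(s)

print("output SINR after cyclic optimization (toy model):", sinr)
```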
(This article belongs to the Special Issue Radar and Radiometric Sensors and Sensing)
Show Figures

Figure 1. Schematic diagram of clutter unit division.
Figure 2. Space-time cross-ambiguity of (a) joint optimization with a fixed pulse-to-pulse waveform under the energy constraint, (b) joint optimization with pulse-to-pulse waveform variations, (c) joint optimization with pulse-to-pulse waveform variations with a 6 × 6 radar configuration.
Figure 3. Comparison of the relationship between the output signal-to-interference-plus-noise ratio (SINR) and the iteration number.
Figure 4. Comparison of minimum detectable velocity (MDV).
Figure 5. Comparison of iteration number.
Figure 6. Comparison of MDV.
Figure 7. Comparison of iteration number.
Figure 8. Comparison of MDV.
19 pages, 11465 KiB  
Article
Polarimetric SAR Time-Series for Identification of Winter Land Use
by Julien Denize, Laurence Hubert-Moy and Eric Pottier
Sensors 2019, 19(24), 5574; https://doi.org/10.3390/s19245574 - 17 Dec 2019
Cited by 11 | Viewed by 3337
Abstract
In the past decade, high spatial resolution Synthetic Aperture Radar (SAR) sensors have provided information that has contributed significantly to cropland monitoring. However, the specific configurations of SAR sensors (e.g., band frequency, polarization mode) used to identify land-use types remain underexplored. This study investigates the contribution of C/L-band frequency, dual/quad polarization and the density of image time-series to winter land-use identification in an agricultural area of approximately 130 km² located in northwestern France. First, SAR parameters were derived from RADARSAT-2, Sentinel-1 and Advanced Land Observing Satellite 2 (ALOS-2) time-series, and one quad-pol and six dual-pol datasets with different spatial resolutions and densities were calculated. Then, land use was classified using the Random Forest algorithm with each of these seven SAR datasets to determine the most suitable SAR configuration for identifying winter land use. Results highlighted that (i) the C-band (F1-score 0.70) outperformed the L-band (F1-score 0.57), (ii) quad polarization (F1-score 0.69) outperformed dual polarization (F1-score 0.59) and (iii) a dense Sentinel-1 time-series (F1-score 0.70) outperformed the RADARSAT-2 and ALOS-2 time-series (F1-scores 0.69 and 0.29, respectively). In addition, Shannon Entropy and SPAN were the SAR parameters most important for discriminating winter land use. Thus, the results of this study emphasize the value of Sentinel-1 time-series data for identifying winter land use. Full article
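The classification step is a conventional supervised workflow: stack the per-date SAR parameters into a feature table, train a Random Forest, and report F1-scores and feature importances. A minimal scikit-learn sketch with placeholder data (the real features would be the time-series of backscatter and polarimetric parameters):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Placeholder feature table: one row per field parcel, columns are SAR parameters
# (e.g., backscatter coefficients, Shannon Entropy, SPAN) stacked over the time-series.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 40))             # stand-in for the real SAR feature matrix
y = rng.integers(0, 4, size=500)           # winter crops / catch crops / grassland / residues

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

print("F1 (macro):", f1_score(y_te, clf.predict(X_te), average="macro"))
# Per-parameter importance, analogous to the ranking that highlighted Shannon Entropy and SPAN:
print("top features:", np.argsort(clf.feature_importances_)[::-1][:5])
```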
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1. Study site location and ground surveys (RGB composite image constructed from Shannon Entropy extracted from Advanced Land Observing Satellite 2 (ALOS-2) data for three dates: 03-09-2017, 04-15-2017 and 05-13-2017. © Kalidéos data 2017 and JAXA data).
Figure 2. The main land-use types encountered in winter in the study area: (a) winter crops (winter barley), (b) catch crops (mustard), (c) grasslands and (d) crop residues (maize stalks).
Figure 3. Importance (in %) of quad-pol SAR parameters based on 100 random forest classifications. Parameters related to backscattering coefficients are in black, while polarimetric parameters are in gray. SE: Shannon Entropy.
Figure 4. Importance (in %) of dual-pol SAR parameters based on 100 random forest classifications using (A) ALOS-2 parameters, (B) RADARSAT-2 parameters and (C) Sentinel-1 parameters. Parameters related to backscattering coefficients are in black, while polarimetric parameters are in gray. SE: Shannon Entropy.
Figure 5. Comparison of classification accuracy of each land-use class between dual and quad polarization (pol) modes. Box-and-whisker plots represent the variation in random forest classification accuracy based on 100 iterations. Whiskers indicate 1.5 times the interquartile range.
Figure 6. Comparison of classification accuracy of each land-use class among band frequencies. Box-and-whisker plots represent the variation in random forest classification accuracy based on 100 iterations. Whiskers indicate 1.5 times the interquartile range.
Figure 7. Comparison of classification accuracy of each land-use class between sparse and dense Sentinel-1 time-series. Box-and-whisker plots represent the variation in random forest classification accuracy based on 100 iterations. Whiskers indicate 1.5 times the interquartile range.
Figure 8. Comparison of classification accuracy of each land-use class among SAR sensors. Box-and-whisker plots represent the variation in RF classification accuracy based on 100 iterations. Whiskers indicate 1.5 times the interquartile range.
Figure 9. Map of winter land-use classes obtained using a parameter dataset derived from the dense Sentinel-1 time-series. Classification was performed using the random forest algorithm.
16 pages, 12155 KiB  
Article
Application of Fuzzy Logic for Selection of Actor Nodes in WSANs —Implementation of Two Fuzzy-Based Systems and a Testbed
by Donald Elmazi, Miralda Cuka, Makoto Ikeda, Keita Matsuo and Leonard Barolli
Sensors 2019, 19(24), 5573; https://doi.org/10.3390/s19245573 - 17 Dec 2019
Cited by 4 | Viewed by 3289
Abstract
The development of sensor networks and the importance of smart devices in the physical world have brought attention to Wireless Sensor and Actor Networks (WSANs). They consist of a large number of static sensors and a few smarter devices, such as different types of robots. Sensor nodes are responsible for sensing and sending information towards an actor node whenever there is an event that needs immediate intervention, such as a natural disaster or a malicious attack in the network. The actor node is responsible for processing the information and taking prompt action accordingly. In order to select an appropriate actor for a given task, different parameters need to be considered, which makes the problem NP-hard. For this reason, we consider Fuzzy Logic and propose two Fuzzy-Based Simulation Systems (FBSS). FBSS1 has three input parameters: Number of Sensors per Actor (NSA), Remaining Energy (RE) and Distance to Event (DE). FBSS2 adds a new parameter, Transmission Range (TR), and is therefore more complex. We explain in detail the differences between these two systems. We also implement a testbed and compare simulation results with experimental results. Full article
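The kind of inference performed by such a fuzzy-based system can be sketched with triangular membership functions and a small rule base; the term sets and rules below are illustrative, not the ones used in FBSS1/FBSS2.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative linguistic terms on normalized [0, 1] inputs (not the paper's exact term sets).
LOW, MID, HIGH = (-0.01, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.01)

def actor_selection_decision(nsa, re, de):
    """Toy FBSS1-style inference: inputs NSA, RE, DE in [0, 1]; returns a selection score."""
    rules = [
        # (firing strength of the antecedents, crisp consequent for a Sugeno-style average)
        (min(tri(re, *HIGH), tri(de, *LOW)), 0.9),   # much energy, close to event -> strongly selected
        (min(tri(re, *MID), tri(de, *MID)), 0.5),
        (min(tri(re, *LOW), tri(de, *HIGH)), 0.1),   # little energy, far away -> weakly selected
        (tri(nsa, *HIGH), 0.3),                      # heavily loaded actor -> penalize
    ]
    num = sum(wgt * out for wgt, out in rules)
    den = sum(wgt for wgt, _ in rules)
    return num / den if den > 0 else 0.0

# Example: an actor with plenty of energy, close to the event, lightly loaded.
print(actor_selection_decision(nsa=0.2, re=0.8, de=0.1))
```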
(This article belongs to the Special Issue Mobile Sensing: Platforms, Technologies and Challenges)
Show Figures

Figure 1. Wireless Sensor Actor Networks (WSANs).
Figure 2. Architectures of WSANs. (a) Fully-Automated. (b) Semi-Automated.
Figure 3. Proposed System.
Figure 4. Fuzzy Logic Controller.
Figure 5. Membership functions types.
Figure 6. Fuzzy membership functions. (a) Transmission Range. (b) Number of Sensors per Actor. (c) Remaining Energy. (d) Distance to Event. (e) Actor Selection Decision.
Figure 7. A distance measuring sensor.
Figure 8. Workstation.
Figure 9. Results for different values of Transmission Range (TR). (a) TR = 0.1. (b) TR = 0.5. (c) TR = 0.9.
Figure 10. Results for TR = 0.1. (a) DE = 0.1. (b) DE = 0.5. (c) DE = 0.9.
Figure 11. Results for TR = 0.5. (a) DE = 0.1. (b) DE = 0.5. (c) DE = 0.9.
Figure 12. Results for TR = 0.9. (a) DE = 0.1. (b) DE = 0.5. (c) DE = 0.9.
Figure 13. Experimental results for different linguistic variables of TR. (a) TR = Short. (b) TR = Middle. (c) TR = Long.
Figure 14. Experimental results for different Events.
10 pages, 4479 KiB  
Article
A 120-ke Full-Well Capacity 160-µV/e Conversion Gain 2.8-µm Backside-Illuminated Pixel with a Lateral Overflow Integration Capacitor
by Isao Takayanagi, Ken Miyauchi, Shunsuke Okura, Kazuya Mori, Junichi Nakamura and Shigetoshi Sugawa
Sensors 2019, 19(24), 5572; https://doi.org/10.3390/s19245572 - 17 Dec 2019
Cited by 13 | Viewed by 7522
Abstract
In this paper, a prototype complementary metal-oxide-semiconductor (CMOS) image sensor with a 2.8-μm backside-illuminated (BSI) pixel with a lateral overflow integration capacitor (LOFIC) architecture is presented. The pixel was capable of a high conversion gain readout with 160 μV/e for low light signals while a large full-well capacity of 120 ke was obtained for high light signals. The combination of LOFIC and the BSI technology allowed for high optical performance without degradation caused by extra devices for the LOFIC structure. The sensor realized a 70% peak quantum efficiency with a normal (no anti-reflection coating) cover glass and a 91% angular response at ±20° incident light. This 2.8-μm pixel is potentially capable of higher than 100 dB dynamic range imaging in a pure single exposure operation. Full article
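The >100 dB single-exposure claim can be sanity-checked from the reported full-well capacity, assuming an input-referred read noise of about one electron in the HCG mode (an assumption for illustration, not a figure stated above):

```python
import math

fwc_e = 120_000        # LCG full-well capacity from the abstract (electrons)
read_noise_e = 1.0     # assumed HCG-mode input-referred read noise (electrons); not stated above

dynamic_range_db = 20 * math.log10(fwc_e / read_noise_e)
print(f"single-exposure dynamic range ~ {dynamic_range_db:.1f} dB")   # ~101.6 dB for 1 e- noise
```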
Show Figures

Figure 1. Pixel circuit schematic of a lateral overflow integration capacitor (LOFIC) pixel.
Figure 2. (a) Pixel operation timing of LOFIC CISs. (b) Potential diagram of LOFIC CISs. There are two charge transfer processes: high conversion gain (HCG) charge transfer and low conversion gain (LCG) charge transfer. SEL, SG, R and TG are gate control pulses. RST and SIG denote the sample-and-hold timings for the reset level and the signal level, respectively.
Figure 3. Schematic cross-sectional view of (a) a conventional FSI LOFIC pixel and (b) the proposed backside-illuminated (BSI) LOFIC pixel. CS_BTM is the bottom electrode of the capacitor CS.
Figure 4. Schematic layout configuration of the 2.8-μm BSI LOFIC pixel. PDN and S/D show the photodiode N implantation and the source/drain implantation for transistors, respectively.
Figure 5. Potential distribution.
Figure 6. Chip photograph. Assembled in a 64-pin PLCC package.
Figure 7. Photo conversion characteristics measured in the HCG and LCG modes. FWC: full-well capacity.
Figure 8. Quantum efficiency measured in HCG and LCG modes.
Figure 9. Angular response measured in HCG and LCG modes. (a) Vertically rotated; (b) horizontally rotated.
Figure 10. Gr/Gb ratio. (a) Vertically rotated; (b) horizontally rotated.
Figure 11. Floating diffusion node (FD) dark current histogram at 60 °C comparing the shallow-trench isolation (STI) and PN-junction isolation.
Figure 12. Sample images captured in HCG, LCG, and single-exposure high dynamic range (SEHDR) modes.
9 pages, 450 KiB  
Article
Exploring Risk of Falls and Dynamic Unbalance in Cerebellar Ataxia by Inertial Sensor Assessment
by Pietro Caliandro, Carmela Conte, Chiara Iacovelli, Antonella Tatarelli, Stefano Filippo Castiglia, Giuseppe Reale and Mariano Serrao
Sensors 2019, 19(24), 5571; https://doi.org/10.3390/s19245571 - 17 Dec 2019
Cited by 23 | Viewed by 3731
Abstract
Background. Patients suffering from cerebellar ataxia have extremely variable gait kinematic features. We investigated whether and how wearable inertial sensors can describe the gait kinematic features of ataxic patients. Methods. We enrolled 17 patients and 16 matched control subjects. We acquired data by means of an inertial sensor attached to an ergonomic belt around the pelvis, connected to a portable computer via Bluetooth. Recordings of all the patients were obtained during overground walking. From the accelerometric data, we obtained the harmonic ratio (HR), i.e., a measure of the smoothness and rhythm of the acceleration pattern, and the step length coefficient of variation (CV), which evaluates the variability of the gait cycle. Results. Compared to controls, patients had a lower HR, meaning a less harmonic and rhythmic acceleration pattern of the trunk, and a higher step length CV, indicating a more variable step length. Both HR and step length CV showed a high effect size in distinguishing patients from controls (p < 0.001 and p = 0.011, respectively). A positive correlation was found between the step length CV and both the number of falls (R = 0.672; p = 0.003) and the clinical severity (ICARS: R = 0.494; p = 0.044; SARA: R = 0.680; p = 0.003). Conclusion. These findings demonstrate that the use of inertial sensors is effective in evaluating gait and balance impairment among ataxic patients. Full article
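The two gait metrics are straightforward to compute from the belt-worn accelerometer: the harmonic ratio compares even and odd harmonics of the stride-frequency spectrum of the trunk acceleration, and the CV is the relative dispersion of the step-length series. A minimal sketch follows; the windowing and detrending choices are illustrative rather than the authors' exact pipeline.

```python
import numpy as np

def harmonic_ratio(acc, fs, stride_time, n_harmonics=20):
    """
    Harmonic ratio of a trunk acceleration signal (anteroposterior/vertical convention:
    sum of even harmonics of the stride frequency divided by the sum of odd ones).
    """
    n = int(round(stride_time * fs))                  # samples per stride
    strides = [acc[i:i + n] for i in range(0, len(acc) - n + 1, n)]
    hrs = []
    for stride in strides:
        amp = np.abs(np.fft.rfft(stride - np.mean(stride)))
        even = amp[2:2 * n_harmonics + 1:2].sum()     # 2nd, 4th, ... harmonics
        odd = amp[1:2 * n_harmonics:2].sum()          # 1st, 3rd, ... harmonics
        hrs.append(even / odd)
    return float(np.mean(hrs))

def step_length_cv(step_lengths):
    """Coefficient of variation (%) of the step-length series."""
    step_lengths = np.asarray(step_lengths, dtype=float)
    return 100.0 * step_lengths.std(ddof=1) / step_lengths.mean()
```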
Show Figures

Figure 1. Correlations between the maximum step-to-step coefficient of variation and the falls/year, ICARS-total, and SARA-total scores in 17 ataxic patients. Pearson's R coefficient (R) and significance (p) are reported.
12 pages, 3996 KiB  
Article
Multi-Site Photoplethysmographic and Electrocardiographic System for Arterial Stiffness and Cardiovascular Status Assessment
by David Perpetuini, Antonio Maria Chiarelli, Lidia Maddiona, Sergio Rinella, Francesco Bianco, Valentina Bucciarelli, Sabina Gallina, Vincenzo Perciavalle, Vincenzo Vinciguerra, Arcangelo Merla and Giorgio Fallica
Sensors 2019, 19(24), 5570; https://doi.org/10.3390/s19245570 - 17 Dec 2019
Cited by 24 | Viewed by 4605
Abstract
The development and validation of a system for multi-site photoplethysmography (PPG) and electrocardiography (ECG) is presented. The system could acquire signals from 8 PPG probes and 10 ECG leads. Each PPG probe was constituted of a light-emitting diode (LED) source at a wavelength of 940 nm and a silicon photomultiplier (SiPM) detector, located in a back-reflection recording configuration. In order to ensure proper optode-to-skin coupling, the probe was equipped with insufflating cuffs. The high number of PPG probes allowed us to simultaneously acquire signals from multiple body locations. The ECG provided a reference for single-pulse PPG evaluation and averaging, allowing the extraction of indices of cardiovascular status with a high signal-to-noise ratio. Firstly, the system was characterized on optical phantoms. Furthermore, in vivo validation was performed by estimating the brachial-ankle pulse wave velocity (baPWV), a metric associated with cardiovascular status. The validation was performed on healthy volunteers to assess the baPWV intra- and extra-operator repeatability and its association with age. Finally, the baPWV, evaluated via the developed instrumentation, was compared to that estimated with a commercial system used in clinical practice (Enverdis Vascular Explorer). The validation demonstrated the system’s reliability and its effectiveness in assessing the cardiovascular status in arterial ageing. Full article
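baPWV itself is the ratio of the brachial-to-ankle path-length difference to the difference in pulse transit times, with each transit time measured from the ECG R-peak to the foot of the corresponding PPG pulse. A minimal sketch with a simplified foot detector (path lengths are passed in rather than derived from height-based formulas, which is an assumption of this sketch):

```python
import numpy as np

def pulse_arrival_time(r_peak_t, ppg, fs, window=0.8):
    """
    Arrival time of the PPG pulse following an ECG R-peak, taken at the waveform foot
    (minimum before the systolic upstroke). A simplified foot detector for illustration.
    """
    i0 = int(r_peak_t * fs)
    seg = ppg[i0:i0 + int(window * fs)]
    upstroke = int(np.argmax(np.diff(seg)))           # steepest rise of the pulse
    foot = int(np.argmin(seg[:upstroke + 1]))         # minimum preceding that rise
    return r_peak_t + foot / fs

def ba_pwv(r_peaks, ppg_brachial, ppg_ankle, fs, path_brachial_m, path_ankle_m):
    """brachial-ankle PWV = (ankle path - brachial path) / mean transit-time difference."""
    dt = []
    for r in r_peaks:
        t_b = pulse_arrival_time(r, ppg_brachial, fs)
        t_a = pulse_arrival_time(r, ppg_ankle, fs)
        dt.append(t_a - t_b)
    return (path_ankle_m - path_brachial_m) / np.mean(dt)
```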
(This article belongs to the Special Issue Flexible and Stretchable Electronic Sensors)
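For orientation, the following hedged Python sketch shows the standard arithmetic behind a brachial-ankle pulse wave velocity estimate (path-length difference over pulse-transit-time difference, with the ECG R-peak as the timing reference). The foot-detection heuristic, window lengths, and distances are illustrative assumptions and not the calibration used by the authors.

```python
# Minimal baPWV sketch: PTT from ECG R-peak to PPG pulse foot, then a path-length model.
import numpy as np
from scipy.signal import find_peaks

def pulse_arrival_time(ecg, ppg, fs):
    """Mean delay (s) from each ECG R-peak to the following PPG minimum (pulse foot)."""
    r_peaks, _ = find_peaks(ecg, distance=int(0.5 * fs), height=np.percentile(ecg, 95))
    delays = []
    for r in r_peaks:
        window = ppg[r:r + int(0.6 * fs)]          # look 0.6 s past the R-peak
        if len(window) > 10:
            delays.append(np.argmin(window) / fs)  # foot of the pulse as the minimum
    return float(np.mean(delays))

def ba_pwv(ptt_brachial, ptt_ankle, d_brachial, d_ankle):
    """baPWV = path-length difference / transit-time difference (m/s)."""
    return (d_ankle - d_brachial) / (ptt_ankle - ptt_brachial)

# toy usage: distances (m) are usually estimated from subject height
print(ba_pwv(ptt_brachial=0.12, ptt_ankle=0.22, d_brachial=0.55, d_ankle=1.25))  # ~7 m/s
```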
Figures:
Figure 1: Photoplethysmography (PPG) optical probe, manufactured at STMicroelectronics (Catania, Italy), that was used in the developed system. The probe worked in a back-reflection recording modality. (a) Each probe was made of a light-emitting diode (LED) and a silicon photomultiplier (SiPM) mounted on the same board. (b) The SiPM and LED board were inserted in bracelets equipped with pressurized cuffs delivering a pressure below that of diastole (~60 mmHg).
Figure 2: (a) Printed circuit board (PCB) developed to interface PPG probes and electrocardiography (ECG) electrodes with the analog input module. (b) NI PXIe4303 (National Instruments, Austin, TX, USA) analog input module that was used in the system. (c) LabVIEW graphical user interface (GUI) program that acquired and allowed the visualization of PPG and ECG signals.
Figure 3: Schematic representation of the acquirable ECG and PPG locations reported on a body template.
Figure 4: ECG and PPG preprocessing chain used for data analysis together with an example of an extracted PPG average pulse and associated standard error.
Figure 5: (a) Example of in vivo measurement using Vascular Explorer (VE) and (b) example of ECG–PPG signal acquisition using the developed system.
Figure 6: Example of cross-talk results for three randomly selected PPG probes (a) in the time and (b) in the frequency domain. Signals are reported for the probe subject to motion (grey), for the control measurements on an optical phantom (red) and on the ulnar artery of a subject (black).
Figure 7: Example of single-pulse averaged PPG signals with associated standard error acquired concurrently in six body locations from two indicative participants.
Figure 8: Intra- and extra-operator average left and right brachial-ankle pulse wave velocity (baPWV) repeatability analysis. (a) Intra-operator correlation and (b) associated Bland–Altman plot; (c) extra-operator correlation and (d) associated Bland–Altman plot.
Figure 9: Correlation plot between age and baPWV for a cohort of healthy participants for (a) the right side and (b) the left side of the body.
Figure 10: Comparison between VE and the ECG–PPG developed system in estimating baPWV. (a) Correlation and (b) associated Bland–Altman plot of baPWVs evaluated through the two instrumentations for the right side of the body; (c) correlation and (d) associated Bland–Altman plot of baPWVs evaluated through the two instrumentations for the left side of the body.
13 pages, 3443 KiB  
Article
Fall Detection Using Multiple Bioradars and Convolutional Neural Networks
by Lesya Anishchenko, Andrey Zhuravlev and Margarita Chizh
Sensors 2019, 19(24), 5569; https://doi.org/10.3390/s19245569 - 17 Dec 2019
Cited by 36 | Viewed by 4345
Abstract
A lack of effective non-contact methods for automatic fall detection, which may result in the development of health- and life-threatening conditions, is a great problem of modern medicine, and in particular, geriatrics. The purpose of the present work was to investigate the advantages of utilizing a multi-bioradar system for improving the accuracy of remote fall detection. The proposed concept combined the use of the wavelet transform and deep learning to detect fall episodes. The continuous wavelet transform was used to obtain a time-frequency representation of the bioradar signal, which served as input data for a pre-trained convolutional neural network (AlexNet) adapted to the fall detection task. Processing of the experimental results showed that the designed multi-bioradar system can be used as a simple and view-independent approach implementing a non-contact fall detection method with an accuracy and F1-score of 99%. Full article
(This article belongs to the Special Issue Electromagnetic Sensors for Biomedical Applications)
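The abstract's pipeline (continuous wavelet transform, then an image-sized scalogram fed to an AlexNet-style network) can be sketched as below. This is a hedged illustration only: the 'morl' wavelet, scale range, nearest-neighbour resize, and normalisation are assumptions, and the CNN itself is omitted.

```python
# Minimal sketch: filtered bioradar trace -> CWT scalogram -> 227x227x3 image
# suitable as AlexNet-style input. Assumes the PyWavelets package.
import numpy as np
import pywt

def scalogram_image(signal, fs, n_scales=64, out_size=227):
    scales = np.arange(1, n_scales + 1)
    coeffs, _ = pywt.cwt(signal, scales, 'morl', sampling_period=1.0 / fs)
    power = np.abs(coeffs)
    power = (power - power.min()) / (power.max() - power.min() + 1e-12)  # scale to [0, 1]
    # nearest-neighbour resize to a square image (stand-in for a proper image resize)
    rows = np.linspace(0, power.shape[0] - 1, out_size).astype(int)
    cols = np.linspace(0, power.shape[1] - 1, out_size).astype(int)
    img = power[np.ix_(rows, cols)]
    return np.stack([img] * 3, axis=-1)          # replicate to 3 channels

# toy usage: 10 s of a breathing-like component plus noise
fs = 100.0
t = np.arange(0, 10, 1 / fs)
trace = np.sin(2 * np.pi * 0.3 * t) + 0.5 * np.random.randn(t.size)
print(scalogram_image(trace, fs).shape)          # (227, 227, 3)
```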
Figures:
Figure 1: Scheme of the bioradar.
Figure 2: Bioradar prototype photos: (a) bioradar assembly; (b) housing panels removed.
Figure 3: Designed shield for Arduino UNO board.
Figure 4: Scheme of the bioradar experiment.
Figure 5: The raw bioradar signals of a human fall occurring at 6.1 s for frontal (upper panel) and lateral (lower panel) oriented bioradars.
Figure 6: The raw bioradar signals without fall episodes for frontal (upper panel) and lateral (lower panel) oriented bioradars.
Figure 7: The filtered data of a human fall occurring at 6.1 s for frontal (upper panel) and lateral (lower panel) oriented bioradars.
Figure 8: Scalograms of filtered signals for frontal-oriented bioradar: (a) with human fall occurring at 6.1 s; (b) without fall.
Figure 9: CNN architecture.
Figure 10: Flowchart for multi-bioradar system data classification.
17 pages, 2847 KiB  
Article
Finite-Time Attitude Stabilization Adaptive Control for Spacecraft with Actuator Dynamics
by Chunbao Wang, Dong Ye, Zhongcheng Mu, Zhaowei Sun and Shufan Wu
Sensors 2019, 19(24), 5568; https://doi.org/10.3390/s19245568 - 16 Dec 2019
Cited by 3 | Viewed by 2877
Abstract
For the attitude stabilization of spacecraft with actuator dynamics, this paper proposes a finite-time control law. Firstly, the dynamic property of the actuator is analyzed by an example. Then, a basic control law is derived to achieve finite-time stability using the double fast terminal sliding mode manifold. When there is no prior knowledge of the time matrix of the actuator, an adaptive law is proposed to estimate the unknown information. An adaptive control law is derived to guarantee the finite-time convergence of the attitude, and a Lyapunov-based analysis is provided. Finally, simulations are carried out to demonstrate the effectiveness of the proposed control law for attitude stabilization with actuator dynamics. The results show that high-precision attitude control performance can be achieved by the proposed scheme. Full article
(This article belongs to the Section State-of-the-Art Sensors Technologies)
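To make the "fast terminal sliding mode" idea concrete, here is a heavily simplified, single-axis Python illustration on double-integrator error dynamics. The surface form, gains, boundary layer, and singularity guard are illustrative assumptions; the paper's actual controller is a quaternion-based adaptive law that also models the actuator dynamics, which this sketch does not reproduce.

```python
# Single-axis FTSM sketch: surface s = de + alpha*e + beta*|e|^gamma*sgn(e),
# equivalent control plus a smoothed switching term, integrated with forward Euler.
import numpy as np

alpha, beta, gamma = 2.0, 2.0, 0.6   # surface parameters (0 < gamma < 1)
k, phi, eps = 5.0, 0.05, 1e-6        # reaching gain, boundary layer, singularity guard
dt, steps = 1e-3, 20000

e, de = 1.0, 0.0                     # initial attitude-like error and its rate
for _ in range(steps):
    s = de + alpha * e + beta * np.sign(e) * abs(e) ** gamma
    u = (-alpha * de
         - beta * gamma * (abs(e) + eps) ** (gamma - 1.0) * de
         - k * np.tanh(s / phi))     # tanh instead of sign to limit chattering
    de += u * dt                     # error dynamics: e'' = u
    e += de * dt

print(f"final |e| = {abs(e):.2e}, final |de| = {abs(de):.2e}")  # expected to approach zero
```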
Figures:
Figure 1: Actuator response to desired control torque.
Figure 2: Structure of basic double fast terminal sliding mode (FTSM) control system.
Figure 3: Structure of adaptive double FTSM control system.
Figure 4: Error of attitude quaternion.
Figure 5: Error of angular velocity.
Figure 6: Actual torque of actuator.
Figure 7: Desired torque of actuator.
Figure 8: Torque difference.
Figure 9: Error of attitude quaternion.
Figure 10: Error of angular velocity.
Figure 11: Actual torque of actuator.
Figure 12: Desired torque of actuator.
Figure 13: Torque difference.
Figure 14: Estimation of time matrix.
22 pages, 4805 KiB  
Article
An Underwater Image Enhancement Method for Different Illumination Conditions Based on Color Tone Correction and Fusion-Based Descattering
by Yidan Liu, Huiping Xu, Dinghui Shang, Chen Li and Xiangqian Quan
Sensors 2019, 19(24), 5567; https://doi.org/10.3390/s19245567 - 16 Dec 2019
Cited by 20 | Viewed by 5412
Abstract
In the shallow-water environment, underwater images often present problems like color deviation and low contrast due to light absorption and scattering in the water body, but for deep-sea images, additional problems like uneven brightness and regional color shift can also exist, due to the use of chromatic and inhomogeneous artificial lighting devices. Since the latter situation is rarely studied in the field of underwater image enhancement, we propose a new model to include it in the analysis of underwater image degradation. Based on the theoretical study of the new model, a comprehensive method for enhancing underwater images under different illumination conditions is proposed in this paper. The proposed method is composed of two modules: color-tone correction and fusion-based descattering. In the first module, the regional or full-extent color deviation caused by different types of incident light is corrected via frequency-based color-tone estimation. In the second module, the residual low contrast and pixel-wise color shift problems are handled by combining the descattering results under the assumption of different states of the image. The proposed method was tested on laboratory and open-water images of different depths and illumination states. Qualitative and quantitative evaluation results demonstrate that the proposed method outperforms many other methods in enhancing the quality of different types of underwater images, and is especially effective in improving the color accuracy and information content in badly-illuminated regions of underwater images with non-uniform illumination, such as deep-sea images. Full article
(This article belongs to the Special Issue Imaging Sensor Systems for Analyzing Subsea Environment and Life)
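The color-tone-correction idea can be illustrated with a hedged Python sketch: estimate a slowly varying colour tone per channel with a large low-pass filter, subtract it, and restore a neutral brightness. The Gaussian-blur tone estimate and the sigma value are simplifying assumptions; the paper additionally uses frequency-based tone estimation, K-means region splitting, guided filtering, and fusion-based descattering, none of which appear here.

```python
# Minimal colour-tone correction sketch for an RGB image in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_color_tone(img, sigma=40.0):
    img = img.astype(np.float64)
    tone = np.stack([gaussian_filter(img[..., c], sigma) for c in range(3)], axis=-1)
    corrected = img - tone          # remove the regional colour cast
    corrected += tone.mean()        # add back a neutral brightness level
    return np.clip(corrected, 0.0, 1.0)

# toy usage: a greenish-blue cast over a gray gradient (suppressed red channel)
h, w = 120, 160
base = np.tile(np.linspace(0.2, 0.8, w), (h, 1))
img = np.stack([base * 0.5, base * 0.9, base * 1.0], axis=-1)
print(correct_color_tone(img).mean(axis=(0, 1)))   # channel means roughly equalised
```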
Figures:
Figure 1: Underwater optical imaging in shallow water and deep sea.
Figure 2: The framework of the proposed method.
Figure 3: (a) A close-up underwater image captured in a water-filled pool. (b) Estimated color tone of (a). (c) The average Fourier frequency of (a); the black box locates the maximum frequency. (d) Color-tone subtracted result of (a). (e) Brightness-adjusted result of (d).
Figure 4: (a) The original underwater images from [26]. (b) Estimated color tone. (c) Color-tone subtracted result. (d) Final result of color-tone correction.
Figure 5: (a) An underwater image captured in the deep sea with only inhomogeneous artificial illumination. (b) A raw color-tone estimation of (a) obtained by applying a spatial Gaussian filter. (c) Segmenting (a) into regions with nearly-uniform illumination by K-means. (d) A combination of the estimated color tones of all regions. (e) Ultimate color-tone image obtained by applying a spatial Gaussian filter and guided filter to (d). (f) Color-tone corrected result of (a) using the color tone in (e). (g) Estimated color-tone image from (a) obtained by applying the color-tone estimation method for uniformly-illuminated images. (h) Color-tone corrected result of (a) using the color tone in (g).
Figure 6: A brief workflow of the fusion-based descattering module.
Figure 7: Visual comparison of different methods on enhancing uniformly-illuminated underwater images from the BV dataset.
Figure 8: Visual comparison of different methods on enhancing non-uniformly-illuminated underwater images from the EP dataset.
Figure 9: (a) The locations of color boards in EP images. (b) The procedure of preparing the color board region for calculating the color difference against the ground truth image.
Figure 10: Visual comparison of different methods on enhancing non-uniformly-illuminated underwater images from the DS dataset.
17 pages, 7651 KiB  
Article
Improved Drought Monitoring Index Using GNSS-Derived Precipitable Water Vapor over the Loess Plateau Area
by Qingzhi Zhao, Xiongwei Ma, Wanqiang Yao, Yang Liu, Zheng Du, Pengfei Yang and Yibin Yao
Sensors 2019, 19(24), 5566; https://doi.org/10.3390/s19245566 - 16 Dec 2019
Cited by 20 | Viewed by 7003
Abstract
Standardized precipitation evapotranspiration index (SPEI) is an acknowledged drought monitoring index, and the evapotranspiration (ET) used to calculate SPEI is obtained based on the Thornthwaite (TH) model. However, the SPEI calculated based on the TH model is overestimated globally, whereas the more accurate ET derived from the Penman–Monteith (PM) model recommended by the Food and Agriculture Organization of the United Nations is unavailable due to the lack of a large amount of meteorological data at most places. Therefore, how to improve the accuracy of ET calculated by the TH model becomes the focus of this study. Here, a revised TH (RTH) model is proposed using the temperature (T) and precipitable water vapor (PWV) data. The T and PWV data are derived from the reanalysis data and the global navigation satellite system (GNSS) observation, respectively. The initial value of ET for the RTH model is calculated based on the TH model, and the time series of the ET residual between the TH and PM models is then obtained. Analysis results reveal that the ET residual is highly correlated with PWV and T: the correlation coefficient between PWV and the ET residual is −0.66, while that between T and the ET residual is −0.54 and 0.59 for T above and below 0 °C, respectively. Therefore, a linear model between the ET residual and PWV/T is established, and the ET value of the RTH model can be obtained by combining the TH-derived ET and the estimated ET residual. Finally, the SPEI calculated based on the RTH model can be obtained and compared with that derived using the PM and TH models. Results in the Loess Plateau (LP) region reveal the good performance of the RTH-based SPEI when compared with the TH-based SPEI over the period of 1979–2016. A case analysis in April 2013 over the LP region also indicates the superiority of the RTH-based SPEI at 88 meteorological and 31 GNSS stations when the PM-based SPEI is considered as the reference. Full article
(This article belongs to the Section Remote Sensors)
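For readers unfamiliar with the pieces involved, the Python sketch below combines the classical Thornthwaite monthly potential ET with a least-squares linear residual correction driven by PWV and temperature, in the spirit of the revised TH (RTH) model. The Thornthwaite coefficients are the standard published ones; the regression simply fits whatever residuals are supplied, so the coefficients and toy data are not the values reported in the paper.

```python
# Thornthwaite PET plus a linear PWV/T residual correction (RTH-style sketch).
import numpy as np

def thornthwaite_pet(monthly_t, day_length_factor=1.0):
    """Classic Thornthwaite PET (mm/month) from 12 monthly mean temperatures (deg C)."""
    t = np.clip(np.asarray(monthly_t, dtype=float), 0.0, None)
    heat_index = np.sum((t / 5.0) ** 1.514)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return 16.0 * day_length_factor * (10.0 * t / max(heat_index, 1e-9)) ** a

def fit_residual_model(pwv, temp, et_residual):
    """Least-squares fit of the ET residual (PM - TH) against PWV and temperature."""
    X = np.column_stack([np.ones_like(pwv), pwv, temp])
    coef, *_ = np.linalg.lstsq(X, et_residual, rcond=None)
    return coef                                   # [intercept, b_pwv, b_temp]

def rth_et(et_th, pwv, temp, coef):
    return et_th + coef[0] + coef[1] * pwv + coef[2] * temp

# toy usage with synthetic monthly data
t_months = np.array([-2, 1, 6, 12, 18, 23, 26, 25, 19, 12, 5, 0], dtype=float)
et_th = thornthwaite_pet(t_months)
pwv = np.linspace(5, 35, 12)
resid = 4.0 - 0.3 * pwv + 0.5 * t_months          # synthetic "PM - TH" residuals
coef = fit_residual_model(pwv, t_months, resid)
print(np.round(rth_et(et_th, pwv, t_months, coef) - et_th, 2))   # recovers the residuals
```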
Figures:
Figure 1: Geographic distribution of selected global navigation satellite system (GNSS) and meteorological stations over the Loess Plateau (LP) region.
Figure 2: Interpolated time series of precipitable water vapor (PWV) at XNIN Station over the period of 1999–2015.
Figure 3: Average evapotranspiration (ET) values at 88 meteorological stations calculated based on Penman–Monteith (PM) and Thornthwaite (TH) models over the period of 1979–2016.
Figure 4: Relationships between the ET residual and (a) precipitable water vapor (PWV) / (b) temperature (T) over the LP region, respectively.
Figure 5: Comparisons of root mean square (RMS) and mean absolute error (MAE) of the ET residual between the TH and revised Thornthwaite (RTH) models at 88 stations over the period of 2015–2016 when the ET derived from the PM model is regarded as reference.
Figure 6: Average RMS improvement rate of the RTH model compared with the TH model in the LP region over the period of 2015–2016.
Figure 7: Scatter plot of monthly ET values calculated by the TH, RTH, and PM models established in the LP region over the period of 2015–2016.
Figure 8: Long-term time series of average ET calculated using different methods in the LP region over the period of 1979–2016.
Figure 9: Long-term time series of the average difference between RTH–PM- and TH–PM-based SPEI under multi-month scales at 88 meteorological stations over the period of 1979–2016.
Figure 10: Pearson’s correlations of TH–PM- and RTH–PM-based SPEI under different multi-month scales.
Figure 11: RMS comparison of the SPEI difference between TH–PM and RTH–PM at each meteorological station over the period of 1979–2016, where the left/right squares at each station refer to the RMS derived from TH- and RTH-based SPEI, respectively.
Figure 12: Average RMS improvement rate of RTH-based SPEI compared with TH-based SPEI in the LP region under different month scales.
Figure 13: Scatter plots of temperature and precipitation in the LP region over the period of 1979–2016.
Figure 14: Comparison of TH- and RTH-based SPEI at XNIN Station over the period of 1999–2014 under different month scales.
Figure 15: Comparison of SPEI calculated using different models at GNSS and meteorological stations in the LP region in April 2013 under multi-month scales, where the first, second, and third columns are the SPEI calculated based on the TH, RTH, and PM models under different month scales.
11 pages, 6388 KiB  
Article
Quartz Tuning Fork Resonance Tracking and Application in Quartz Enhanced Photoacoustics Spectroscopy
by Roman Rousseau, Nicolas Maurin, Wioletta Trzpil, Michael Bahriz and Aurore Vicet
Sensors 2019, 19(24), 5565; https://doi.org/10.3390/s19245565 - 16 Dec 2019
Cited by 19 | Viewed by 3961
Abstract
The quartz tuning fork (QTF) is a piezoelectric transducer with a high quality factor that was successfully employed in sensitive applications such as atomic force microscopy or Quartz-Enhanced Photo-Acoustic Spectroscopy (QEPAS). The variability of the environment (temperature, humidity) can lead to a drift of the QTF resonance. In most applications, regular QTF calibration is absolutely essential. Because the requirements vary greatly depending on the field of application, different characterization methods can be found in the literature. We present a review of these methods and compare them in terms of accuracy. Then, we further detail one technique, called Beat Frequency analysis, based on the transient response followed by heterodyning. This method proved to be fast and accurate. Further, we demonstrate the resonance tracking of the QTF while changing the temperature and the humidity. Finally, we integrate this characterization method in our Resonance Tracking (RT) QEPAS sensor and show the significant reduction of the signal drift compared to a conventional QEPAS sensor. Full article
(This article belongs to the Section Physical Sensors)
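The beat-frequency idea described in the abstract can be sketched as follows: after the excitation stops, the demodulated ring-down oscillates at the beat note |f0 − fexc| with an exponentially decaying envelope, so fitting a decaying sinusoid yields f0 and Q. This Python sketch is a hedged illustration; the initial-guess strategy, noise level, and the assumption that the resonance lies above the excitation frequency are all assumptions, not the authors' exact procedure.

```python
# Fit an exponentially decaying sinusoid to a synthetic ring-down and recover f0 and Q.
import numpy as np
from scipy.optimize import curve_fit

def decaying_sine(t, amp, tau, f_beat, phase, offset):
    return amp * np.exp(-t / tau) * np.cos(2 * np.pi * f_beat * t + phase) + offset

def fit_beat_signal(t, y, f_exc):
    dt = t[1] - t[0]
    spec = np.abs(np.fft.rfft(y - y.mean()))
    f_guess = np.fft.rfftfreq(y.size, dt)[np.argmax(spec)]     # coarse beat-frequency guess
    p0 = [np.max(np.abs(y)), (t[-1] - t[0]) / 3.0, f_guess, 0.0, 0.0]
    popt, _ = curve_fit(decaying_sine, t, y, p0=p0, maxfev=20000)
    amp, tau, f_beat, phase, offset = popt
    f0 = f_exc + abs(f_beat)        # assumes the QTF resonance lies above the excitation
    q = np.pi * f0 * abs(tau)       # Q from the amplitude decay time constant
    return f0, q

# synthetic ring-down: f0 = 32,750.3 Hz, Q ~ 9500, excited 15.3 Hz below resonance
fs, f_exc, f0_true, q_true = 2000.0, 32735.0, 32750.3, 9500.0
t = np.arange(0.0, 0.5, 1.0 / fs)
tau_true = q_true / (np.pi * f0_true)
rng = np.random.default_rng(0)
y = np.exp(-t / tau_true) * np.cos(2 * np.pi * (f0_true - f_exc) * t) + 0.01 * rng.normal(size=t.size)
print(fit_beat_signal(t, y, f_exc))   # expected to be close to (32750.3, 9500)
```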
Figures:
Figure 1: The same QTF is characterized with a frequency sweep (f0 = 32,750.0 Hz, Q = 9783) (a), the transient analysis (Q = 10,031) (b) and the BF analysis (f0 = 32,750.3 Hz, Q = 9263) (c). The experimental data are fitted with the electrical model for (a) and with an exponentially decaying sinusoid for (c). The signal envelope is extracted in (b) in order to get a value of Q.
Figure 2: Observation of typical signals in BF analysis: the excitation signal (a), the output of the transimpedance amplifier (b) and the output of the lock-in amplifier (c). The excitation signal provides the initial energy to the QTF with a sinusoid at a frequency fexc during a time texc. The QTF is forced to oscillate at fexc, leading to a continuous demodulated signal on the lock-in amplifier (LIA) output. When the excitation signal drops to zero, the QTF returns to its natural frequency f0, resulting in a Beat Frequency (BF) signal on the LIA.
Figure 3: Setup for the BF measurement. The QTF is enclosed in a temperature and humidity regulated chamber. The relay, controlled by the analog output of a DAQ card, is used to switch between the excitation source and the ground. The QTF current is amplified and then demodulated by a lock-in amplifier. A LabVIEW program on a laptop synchronizes the instruments to obtain the BF signal, from which the QTF parameters are extracted in real time.
Figure 4: (a) Resonant frequency of a vacuum capped (black) and an open QTF (red) as a function of temperature. The inset (bottom right) represents the resonant frequency versus the time for two temperature steps. (b) Recording of the QTF parameters f0 and Q while varying the humidity and keeping the temperature constant. The humidity cycle is made of 10% RH steps: 7 ascending steps (30 to 90% RH) and 2 descending steps (50% and 30% RH).
Figure 5: (a) Evolution of the resonant frequency (continuous line) and the quality factor (dotted line) and (b) response of the gas sensor to an injection of 1% dry CH4 and 1% wet CH4, with the QEPAS (green) and the RT-QEPAS technique. In RT-QEPAS, the QTF instantaneous frequency f0 is used as a feedback for the laser modulation frequency (blue), and then normalized by the Q factor (red). The RT-QEPAS sensor is thus more robust to environment changes than the conventional QEPAS sensor. The gas cell is flushed with pure nitrogen between the two injections.
16 pages, 7109 KiB  
Article
Comparison of Various Frequency Matching Schemes for Geometric Correction of Geostationary Ocean Color Imager
by Jong-Hwan Son, Han-Gyeol Kim, Hee-Jeong Han and Taejung Kim
Sensors 2019, 19(24), 5564; https://doi.org/10.3390/s19245564 - 16 Dec 2019
Cited by 2 | Viewed by 2717
Abstract
Current precise geometric correction of Geostationary Ocean Color Imager (GOCI) image slots is performed by shoreline matching. However, it is troublesome to handle slots with few or no shorelines, or slots covered by clouds. Geometric correction by frequency matching has been proposed to handle these slots. In this paper, we further extend previous research on frequency matching by comparing the performance of three frequency domain matching methods: phase correlation, gradient correlation, and orientation correlation. We compared the performance of each matching technique in terms of match success rate and geometric accuracy. We concluded that the three frequency domain matching methods with peak search range limits were comparable in geometric correction performance to shoreline matching. The proposed method handles translation only, and assumes that rotation has been corrected. We need to do further work on how to handle rotation by frequency matching. Full article
(This article belongs to the Section Remote Sensors)
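As a concrete reference point for the frequency-domain matching compared in the paper, the Python sketch below implements plain phase correlation (translation from the peak of the inverse FFT of the normalised cross-power spectrum). Gradient and orientation correlation differ only in what is transformed before the cross-power spectrum; the peak-search-range limit used in the paper is not included here.

```python
# Phase correlation: estimate the (row, col) shift between two images.
import numpy as np

def phase_correlation(ref, tgt):
    """Estimate the (row, col) shift of tgt relative to ref from the cross-power spectrum."""
    F_ref, F_tgt = np.fft.fft2(ref), np.fft.fft2(tgt)
    cps = F_tgt * np.conj(F_ref)
    cps /= np.abs(cps) + 1e-12                           # keep phase information only
    corr = np.abs(np.fft.ifft2(cps))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    size = np.array(corr.shape, dtype=float)
    return np.where(peak > size / 2, peak - size, peak)  # wrap to signed shifts

# toy usage: shift a random image by (+5, -3) pixels and recover the shift
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
tgt = np.roll(np.roll(ref, 5, axis=0), -3, axis=1)
print(phase_correlation(ref, tgt))                        # expected: [ 5. -3.]
```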
Figures:
Figure 1: Geostationary Ocean Color Imager (GOCI) slot arrangement and imaging sequence (reprinted from [4]).
Figure 2: Frequency domain matching process.
Figure 3: GOCI Level 1A band 8 data.
Figure 4: Validation points distribution (reprinted from [4]).
Figure 5: Overall test procedures.
Figure 6: Example of setting the matching area (2011.04.05 UTC 03:00, Slots 14, 15).
Figure 7: Final geometric correction process using frequency domain matching.
Figure 8: Example of Pair A (2011.04.05 Coordinated Universal Time (UTC) 03:00, slots 2–5, slots 8–9).
Figure 9: Example of the Pair A gradient matrix (2011.04.05 UTC 03:00, Slots 2–5).
Figure 10: Level 1B image generated by Orientation Correlation (OC) frequency domain matching ((a)–(c) are magnified images of selected regions).
Figure 11: Comparison of seam lines between slots in each mosaic image (2011.04.05 UTC 03:00).
13 pages, 6223 KiB  
Article
A Quad-Constellation GNSS Navigation Algorithm with Colored Noise Mitigation
by Xianqiang Cui, Tianhang Gao and Changsheng Cai
Sensors 2019, 19(24), 5563; https://doi.org/10.3390/s19245563 - 16 Dec 2019
Cited by 4 | Viewed by 2813
Abstract
The existence of colored noise in kinematic positioning will greatly degrade the accuracy of position solutions. This paper proposes a Kalman filter-based quad-constellation global navigation satellite system (GNSS) navigation algorithm with colored noise mitigation. In this algorithm, the observation colored noise and state colored noise models are established by utilizing their residuals in the past epochs, and then the colored noise is predicted using the models for mitigation in the current epoch in the integrated Global Positioning System (GPS)/GLObal NAvigation Satellite System (GLONASS)/BeiDou Navigation Satellite System (BDS)/Galileo navigation. Kinematic single point positioning (SPP) experiments under different satellite visibility conditions and road patterns are conducted to evaluate the effect of colored noise on the positioning accuracy for the quad-constellation combined navigation. Experimental results show that the colored noise model can fit the colored noise more effectively in the case of good satellite visibility. As a result, the positioning accuracy improvement is more significant after handling the colored noise. The three-dimensional positioning accuracy can be improved by 25.1%. Different satellite elevation cut-off angles of 10°, 20°, and 30° are set to simulate different satellite visibility situations. Results indicate that the colored noise decreases as the elevation cut-off angle increases. Consequently, the improvement of the SPP accuracy after handling the colored noise is gradually reduced from 27.3% to 16.6%. In the cases of straight and curved roads, the quad-constellation GNSS-SPP accuracy can be improved by 22.1% and 25.7% after taking the colored noise into account. The colored noise can be well-modeled and mitigated in both the straight and curved road conditions. Full article
(This article belongs to the Section State-of-the-Art Sensors Technologies)
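To illustrate the residual-based prediction step in spirit, here is a hedged Python sketch that models epoch-to-epoch correlated residuals as a first-order autoregressive (Gauss-Markov) process and predicts the coloured-noise term for the next epoch. The AR(1) form, window, and toy data are illustrative stand-ins for the paper's observation- and state-residual models.

```python
# AR(1) colored-noise estimation and one-step prediction from past residuals.
import numpy as np

def fit_ar1(residuals):
    """Estimate the AR(1) coefficient phi from a window of past residuals."""
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()
    return np.dot(r[:-1], r[1:]) / (np.dot(r[:-1], r[:-1]) + 1e-12)

def predict_colored_noise(residuals, phi):
    """One-step prediction of the coloured noise, to be removed at the current epoch."""
    return phi * residuals[-1]

# toy usage: simulate coloured residuals, fit phi, and predict the next epoch
rng = np.random.default_rng(1)
true_phi, n = 0.8, 500
noise = np.zeros(n)
for k in range(1, n):
    noise[k] = true_phi * noise[k - 1] + rng.normal(scale=0.3)
phi_hat = fit_ar1(noise[:-1])
print(round(phi_hat, 2),                                   # estimated phi (~0.8)
      round(predict_colored_noise(noise[:-1], phi_hat), 3),  # predicted value
      round(noise[-1], 3))                                  # actual value at that epoch
```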
Figures:
Figure 1: Quad-constellation global navigation satellite system single point positioning (GNSS-SPP) algorithm with mitigation of colored noise.
Figure 2: Vehicle running route produced by Google Earth for the quad-constellation navigation test.
Figure 3: Equipment setup and navigation test environment: (A) Base station; (B) Rover station; (C) Open-sky road; (D) Signal-blocked road.
Figure 4: Number of satellites and position dilution of precision (PDOP) values under different satellite visibility conditions.
Figure 5: Residuals and colored noise under the good satellite visibility condition: (a) Observation residuals and predicted colored noise; (b) State residuals and predicted colored noise.
Figure 6: Residuals and colored noise under the poor satellite visibility condition: (a) Observation residuals and predicted colored noise; (b) State residuals and predicted colored noise.
Figure 7: Quad-constellation positioning errors with and without the correction of colored noise under different satellite visibility conditions.
Figure 8: Quad-constellation positioning errors with and without the correction of colored noise at different elevation cut-off angles.
Figure 9: Quad-constellation SPP errors with and without correction of colored noise under different road patterns.
14 pages, 4630 KiB  
Article
Compact Multifunctional Wireless Capacitive Sensor System and Its Application in Icing Detection
by Wolfgang Stocksreiter and Hubert Zangl
Sensors 2019, 19(24), 5562; https://doi.org/10.3390/s19245562 - 16 Dec 2019
Cited by 5 | Viewed by 3678
Abstract
When sensors are used for the monitoring of surfaces, for example, with respect to ice aggregation, it is of interest to have a full coverage of the surface without sensor-inherent detection gaps, so-called “blind spots”. Since the components of such a sensor, like antennas and energy harvesting, also require space on the surface, the actual, effective sensing area is usually much smaller than the total surface of the device. Consequently, with an array of such sensors, it is not possible to monitor the entire surface of an object without gaps, even if the sensors are mounted directly adjacent to each other. Furthermore, the excessive size may also prevent the application of a single sensor in space-constrained situations as they occur, e.g., on aircraft. This article investigates a sensor concept in which the electrodes of the sensors are used for both radio data transmission and energy harvesting at the same time. Thus, the wireless data transmission in the 2.45 GHz Industrial, Scientific and Medical (ISM) band is combined with the sensor electrodes and also with a photovoltaic cell for energy harvesting. The combination of sensor technology, communication, and harvesting enables a compact system and thus reduces blind spots to a minimum. In the following, the structure and functionality of the system are described and verified by laboratory experiments. Full article
(This article belongs to the Section Physical Sensors)
Figures:
Figure 1: (a) Wireless ice sensor mounted on the outer skin of an aircraft. Photo “Wireless and Flexible Ice Detection on Aircraft” [8]. (b) Different exposed positions of an airplane. If ice occurs, these sensors wirelessly transmit data to the base station inside the cockpit.
Figure 2: (a) Prototype setup with electrodes (E1 to E4), source electrode, and combined patch antenna (E5). The capacitance measurement chip (capacitive–digital converter, CDC) and processor with integrated Bluetooth radio (system-on-chip, SOC). A 250 kHz source signal and 2.45 GHz radio signal combined via filter and signal combiner. (b) Sensor electrodes (E1 to E4) with different copper areas arranged around the source electrode and patch antenna (E5) in the middle of the configuration.
Figure 3: (a) The originally developed patch antenna (ANT1) without sensor electronics. (b) The patch antenna with four sensor electrodes arranged around the transmit patch. (c) The backside of this antenna (ANT2); all five electrodes as well as the ground electrode were guided to a pin header to either open or short the electrodes. (d) The antenna like ANT2, however with an insulated photovoltaic (PV) cell on top of the central transmitting electrode (ANT3).
Figure 4: Design of the prototype sensor. The antenna and the electrode structure were on top of the 640 µm Rogers RO3006 substrate. Electronic components and wiring were on the 200 µm FR4 substrate. The PV cell was located directly above the patch antenna. The PV cell and the electrode structure were insulated by a PVC foil.
Figure 5: Sensor prototype inside the climate chamber, with mounted plastic foil at the sensor electrodes and a plastic water tank above. Wireless data transmission and acquisition via Bluetooth dongle nRF51 outside the chamber at a distance of 4 m.
Figure 6: Comparison of the return losses S11 of the different antenna configurations. Green: the original antenna ANT1 with a large ground plane, without electrodes positioned around the antenna patch. Blue: the antenna with electrodes around. Red: the antenna including the PV cell above the antenna patch. Solid lines: floating electrodes; dashed: electrodes shorted to ground.
Figure 7: Comparison of antenna gains with and without electrodes and PV cell. (a) The radiation pattern of the antenna with the highest gain, without electrodes; since the characteristic was similar for all five antennas, only the antenna with the maximum gain is displayed. (b) The antenna gain of all five antenna configurations.
Figure 8: Measurement of transmission losses S21 in the anechoic chamber with a broadband dipole receive antenna (UHA9125D).
Figure 9: Received power at the Schwarzbeck dipole antenna in horizontal antenna orientation. Green: ANT1 with only the patch antenna; blue: ANT2 with electrodes; red: ANT3 with PV cell. 0 dBm feed power at the Device Under Test (DUT). The continuous line indicates the received power with floating electrodes; the dashed line indicates the power of the shorted ones.
Figure 10: Measured capacitances between the transmitting electrode E5 and the four receiving electrodes E1 to E4. The values were measured with an LCR bridge. If there was moisture on the electrodes, the capacitance increased rapidly. The blue colored area represents the measuring range of the CDC used.
Figure 11: Measured differential capacitances of the prototype system between the receive electrodes E1 to E4 and the transmit electrode E5 depending on the temperature between +30 and −40 °C at a water level of 3 mm.
Figure 12: Measurement setup in the climatic chamber to investigate the environmental influences on the antenna of the sensor system. Inside the chamber there was the sensor prototype antenna, and at a distance of 1.2 m there was an FR4 patch antenna. Transmission S21 and return loss S11 were measured with a network analyzer.
Figure 13: Frequency and impedance of the antenna shifted as a function of environmental influences. Melting ice had the greatest influence on the radio performance.
17 pages, 7045 KiB  
Article
Novel and Automatic Rice Thickness Extraction Based on Photogrammetry Using Rice Edge Features
by Yuchen Kong, Shenghui Fang, Xianting Wu, Yan Gong, Renshan Zhu, Jian Liu and Yi Peng
Sensors 2019, 19(24), 5561; https://doi.org/10.3390/s19245561 - 16 Dec 2019
Cited by 8 | Viewed by 3082
Abstract
The dimensions of phenotyping parameters such as the thickness of rice play an important role in rice quality assessment and phenotyping research. The objective of this study was to propose an automatic method for extracting rice thickness. This method was based on the principle of binocular stereovision but avoiding the problem that it was difficult to directly match the corresponding points for 3D reconstruction due to the lack of texture of rice. Firstly, the shape features of edge, instead of texture, was used to match the corresponding points of the rice edge. Secondly, the height of the rice edge was obtained by way of space intersection. Finally, the thickness of rice was extracted based on the assumption that the average height of the edges of multiple rice is half of the thickness of rice. According to the results of the experiments on six kinds of rice or grain, errors of thickness extraction were no more than the upper limit of 0.1 mm specified in the national industry standard. The results proved that edge features could be used to extract rice thickness and validated the effectiveness of the thickness extraction algorithm we proposed, which provided technical support for the extraction of phenotyping parameters for crop researchers. Full article
(This article belongs to the Special Issue Advanced Sensor Technologies for Crop Phenotyping Application)
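The edge-feature matching step described in the abstract can be illustrated with a hedged Python sketch: compute the distances from the grain centroid to its contour points, resample them to a fixed length, and compare the resulting shape signatures by normalised correlation. OpenCV is assumed for contour extraction; the start-point alignment, resampling length, and toy masks are illustrative choices, and the subsequent space intersection for height is not shown.

```python
# Shape-signature matching sketch: centroid-to-edge distance profiles of rice-like blobs.
import numpy as np
import cv2

def shape_signature(mask, n_samples=64):
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    centroid = contour.mean(axis=0)                       # rough centroid from contour points
    dists = np.linalg.norm(contour - centroid, axis=1)
    # resample to a fixed length so signatures of different grains are comparable
    idx = np.linspace(0, len(dists) - 1, n_samples)
    sig = np.interp(idx, np.arange(len(dists)), dists)
    return sig / sig.max()

def signature_similarity(sig_a, sig_b):
    a, b = sig_a - sig_a.mean(), sig_b - sig_b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# toy usage: two elliptical "grains" with slightly different sizes
yy, xx = np.mgrid[0:120, 0:160]
grain1 = ((xx - 60) ** 2 / 45 ** 2 + (yy - 60) ** 2 / 18 ** 2) <= 1.0
grain2 = ((xx - 100) ** 2 / 40 ** 2 + (yy - 60) ** 2 / 16 ** 2) <= 1.0
print(round(signature_similarity(shape_signature(grain1), shape_signature(grain2)), 3))
```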
Figures:
Figure 1: Experiment equipment.
Figure 2: Experimental equipment: (a) left image of the stereo pairs; (b) right image of the stereo pairs.
Figure 3: Top view of rice.
Figure 4: Side view of rice.
Figure 5: Schematic diagram of the collinear equation.
Figure 6: One of the images of rice.
Figure 7: Flow chart for obtaining thickness.
Figure 8: Process of matching the corresponding rice.
Figure 9: n distances from the centroid to edge points.
Figure 10: Resampling the distances.
Figure 11: Steps of rice extraction from images: (a) original image; (b) image after graying; (c) image after filtering and denoising; (d) image after binarization; (e) removing broken rice; (f) removing clumped rice.
Figure 12: Matching of the corresponding points.
Figure 13: Result of rice extraction: (a) original rice image; (b) the extracted part (rice) was filled with black, but a small part of white still remained.
Figure 14: The approximation degree between the true value and the calculated value extracted from different amounts of samples.
Figure 15: Effect of the base-height ratio on thickness accuracy.
15 pages, 4676 KiB  
Article
Electrochemical Detection of C-Reactive Protein in Human Serum Based on Self-Assembled Monolayer-Modified Interdigitated Wave-Shaped Electrode
by Somasekhar R. Chinnadayyala, Jinsoo Park, Young Hyo Kim, Seong Hye Choi, Sang-Myung Lee, Won Woo Cho, Ga-Yeon Lee, Jae-Chul Pyun and Sungbo Cho
Sensors 2019, 19(24), 5560; https://doi.org/10.3390/s19245560 - 16 Dec 2019
Cited by 23 | Viewed by 5480
Abstract
An electrochemical capacitance immunosensor based on an interdigitated wave-shaped micro electrode array (IDWµE) for direct and label-free detection of C-reactive protein (CRP) was reported. A self-assembled monolayer (SAM) of dithiobis (succinimidyl propionate) (DTSP) was used to modify the electrode array for antibody immobilization. The SAM functionalized electrode array was characterized morphologically by atomic force microscopy (AFM) and energy dispersive X-ray spectroscopy (EDX). The nature of gold-sulfur interactions on SAM-treated electrode array was probed by X-ray photoelectron spectroscopy (XPS). The covalent linking of anti-CRP-antibodies onto the SAM modified electrode array was characterized morphologically through AFM, and electrochemically through cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). The application of phosphate-buffered saline (PBS) and human serum (HS) samples containing different concentrations of CRP in the electrode array caused changes in the electrode interfacial capacitance upon CRP binding. CRP concentrations in PBS and HS were determined quantitatively by measuring the change in capacitance (ΔC) through EIS. The electrode immobilized with anti-CRP-antibodies showed an increase in ΔC with the addition of CRP concentrations over a range of 0.01–10,000 ng mL−1. The electrode showed detection limits of 0.025 ng mL−1 and 0.23 ng mL−1 (S/N = 3) in PBS and HS, respectively. The biosensor showed a good reproducibility (relative standard deviation (RSD), 1.70%), repeatability (RSD, 1.95%), and adequate selectivity in presence of interferents towards CRP detection. The sensor also exhibited a significant storage stability of 2 weeks at 4 °C in 1× PBS. Full article
(This article belongs to the Section Chemical Sensors)
Figures:
Figure 1: Microscopic image of the IDWµE fabricated on a glass slide with dimensions of 16 mm × 63 mm × 1.1 mm. (a) The electrode array microscopic images at low magnification (100×) and high magnification (200×) showing a 30 µm width for finger and spacing, respectively. (b) A schematic illustration of the DTSP functionalization and immobilization of anti-CRP-antibodies onto the IDWµE array. (c) The underlying working principle of the CRP immunosensor based on quantifying the total capacitance of the sensor after sequential formation of SAM, anti-CRP antibody, BSA, and CRP layers (where C1 = C_SAM, C2 = C_anti-CRP-Ab, C3 = C_CRP). (d) The sequential surface modification steps of the IDWµE array for immunosensing of CRP.
Figure 2: X-ray photoelectron spectroscopy (XPS) spectra of DTSP-SAM on IDWµE: (a) S2p and (b) C1s bands.
Figure 3: AFM topographical images of DTSP/IDWµE (a), DTSP/IDWµE (b), anti-CRP-antibody/DTSP/IDWµE (c), and CRP/BSA/anti-CRP-antibody/DTSP/IDWµE arrays (d) scanned at a rate of 0.5 Hz with topographic profile.
Figure 4: Effect of pH (a), concentration of anti-CRP-Ab (b), and incubation time (c) on the response of the immunosensor to 1 ng mL−1 of CRP analyte.
Figure 5: CV (a) and Nyquist plot (b) of the bare IDWµE (i), DTSP/IDWµE (ii), anti-CRP-antibody/DTSP/IDWµE (iii), BSA/anti-CRP-antibody/DTSP/IDWµE arrays (iv), and BSA/anti-CRP-antibody/DTSP/IDWµE arrays with 0.1 ng mL−1 of CRP (v) in 5 mM K3Fe(CN)6/K4Fe(CN)6 (1:1) and 0.1 M KCl in 1× PBS.
Figure 6: Bode plot of the impedance magnitude (|Z|) (a) and reactive capacitance (C) (b) obtained at IDWµE (i), DTSP/IDWµE (ii), anti-CRP-antibody/DTSP/IDWµE (iii), BSA/anti-CRP-antibody/DTSP/IDWµE arrays (iv), and BSA/anti-CRP-antibody/DTSP/IDWµE array with 0.1 ng mL−1 (v) of CRP at various modification stages of the electrode surface for the detection of CRP in 1× PBS.
Figure 7: Normalized capacitance (|∆C|) of the BSA/anti-CRP-antibodies/DTSP/IDWµE array measured with respect to increased CRP concentrations diluted in 1× PBS (a); calibration plot for |∆C| at 10 Hz with increasing CRP concentrations diluted in 1× PBS and ranging from 0.01 to 10,000 ng mL−1 (b). Each data point represents the average of three values (n = 3), with the range indicated by error bars.
Figure 8: Interference study of the BSA/anti-CRP-antibodies/DTSP/IDWµE array with different interfering agents (human chorionic gonadotrophin (HCG); insulin; cardiac troponin-I (cTn)) in 1× PBS, pH 7.4, at a concentration of 0.1 ng mL−1 and a frequency of 10 Hz (a); storage stability study of the BSA/anti-CRP-antibodies/DTSP/IDWµE array at a regular interval of 1 day in 100 µL of 1× PBS (pH 7.4) at 0.1 ng mL−1 and a frequency of 10 Hz (b). Each data point represents the average of three values (n = 3), with the range indicated by error bars.
Figure 9: Normalized capacitance (|∆C|) of the BSA/anti-CRP-antibodies/DTSP/IDWµE array measured with respect to increasing CRP concentrations diluted in HS (Conc. HS-CRP) (a); calibration plot for |∆C| at 10 Hz with increasing CRP concentrations diluted in HS ranging from 0.01 to 10,000 ng mL−1 (b). Each data point represents the average of three values (n = 3), with the range indicated by error bars.
12 pages, 2327 KiB  
Article
Improved Classification Method Based on the Diverse Density and Sparse Representation Model for a Hyperspectral Image
by Na Li, Ruihao Wang, Huijie Zhao, Mingcong Wang, Kewang Deng and Wei Wei
Sensors 2019, 19(24), 5559; https://doi.org/10.3390/s19245559 - 16 Dec 2019
Cited by 1 | Viewed by 2715
Abstract
To solve the small sample size (SSS) problem in the classification of hyperspectral image, a novel classification method based on diverse density and sparse representation (NCM_DDSR) is proposed. In the proposed method, the dictionary atoms, which learned from the diverse density model, are used to solve the noise interference problems of spectral features, and an improved matching pursuit model is presented to obtain the sparse coefficients. Airborne hyperspectral data collected by the push-broom hyperspectral imager (PHI) and the airborne visible/infrared imaging spectrometer (AVIRIS) are applied to evaluate the performance of the proposed classification method. Results illuminate that the overall accuracies of the proposed model for classification of PHI and AVIRIS images are up to 91.59% and 92.83% respectively. In addition, the kappa coefficients are up to 0.897 and 0.91. Full article
(This article belongs to the Special Issue Machine Learning Methods for Image Processing in Remote Sensing)
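For context on the sparse-representation part of the classifier, the Python sketch below implements a generic sparse-representation classification with a plain orthogonal matching pursuit and a class-wise reconstruction-residual rule. It is a hedged stand-in: the paper learns its dictionary atoms with a diverse-density model and uses an improved matching pursuit, neither of which is reproduced here, and the random dictionary and labels are toy data.

```python
# Generic SRC sketch: OMP sparse coding + minimum class-residual decision.
import numpy as np

def omp(D, y, n_nonzero=5):
    """Greedy OMP: select columns of D (unit-norm atoms) to approximate y."""
    residual, support = y.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ sol
    coef[support] = sol
    return coef

def classify(D, labels, y, n_nonzero=5):
    """Assign y to the class whose atoms give the smallest reconstruction residual."""
    coef = omp(D, y, n_nonzero)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)

# toy usage: 40 unit-norm atoms in two classes; y is built from class-0 atoms
rng = np.random.default_rng(0)
D = rng.normal(size=(50, 40)); D /= np.linalg.norm(D, axis=0)
labels = np.repeat([0, 1], 20)
y = D[:, 3] * 0.8 + D[:, 7] * 0.5 + 0.01 * rng.normal(size=50)
print(classify(D, labels, y))        # expected: 0
```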
Figures:
Figure 1: The manifold model of the diversity density (DD) algorithm.
Figure 2: Sparse representation model.
Figure 3: Data cubes of experimental images. (a) Data cube of the push-broom hyperspectral imager (PHI) and (b) data cube of the airborne visible/infrared imaging spectrometer (AVIRIS).
Figure 4: Ground truth of the research area: (a) ground truth of PHI data and (b) ground truth of AVIRIS data.
Figure 5: Classification results of PHI data with different methods (the ratio of the number of samples to the number of bands is 0.5): (a) ground truth; (b) PCA_MinD; (c) SVM; (d) NCM_DDSR.
Figure 6: Classification results of Salinas_A (the ratio of the number of samples to the number of bands is 0.5): (a) ground truth; (b) PCA_MinD; (c) SVM; (d) NCM_DDSR.
22 pages, 6243 KiB  
Article
Citrus Tree Segmentation from UAV Images Based on Monocular Machine Vision in a Natural Orchard Environment
by Yayong Chen, Chaojun Hou, Yu Tang, Jiajun Zhuang, Jintian Lin, Yong He, Qiwei Guo, Zhenyu Zhong, Huan Lei and Shaoming Luo
Sensors 2019, 19(24), 5558; https://doi.org/10.3390/s19245558 - 16 Dec 2019
Cited by 39 | Viewed by 4527
Abstract
The segmentation of citrus trees in a natural orchard environment is a key technology for achieving the fully autonomous operation of agricultural unmanned aerial vehicles (UAVs). Therefore, a tree segmentation method based on monocular machine vision technology and a support vector machine (SVM) algorithm are proposed in this paper to segment citrus trees precisely under different brightness and weed coverage conditions. To reduce the sensitivity to environmental brightness, a selective illumination histogram equalization method was developed to compensate for the illumination, thereby improving the brightness contrast for the foreground without changing its hue and saturation. To accurately differentiate fruit trees from different weed coverage backgrounds, a chromatic aberration segmentation algorithm and the Otsu threshold method were combined to extract potential fruit tree regions. Then, 14 color features, five statistical texture features, and local binary pattern features of those regions were calculated to establish an SVM segmentation model. The proposed method was verified on a dataset with different brightness and weed coverage conditions, and the results show that the citrus tree segmentation accuracy reached 85.27% ± 9.43%; thus, the proposed method achieved better performance than two similar methods. Full article
(This article belongs to the Section Optical Sensors)
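The candidate-region step described in the abstract (chromatic index plus Otsu thresholding) can be sketched in a few lines of Python with OpenCV. This is a hedged illustration using the common 2G−R−B (excess green) index and a morphological opening; the SVM refinement on colour, statistical texture, and LBP features, and the selective illumination compensation, are not included.

```python
# Excess-green (2G-R-B) index + Otsu threshold to propose vegetation regions.
import numpy as np
import cv2

def candidate_vegetation_mask(bgr):
    b, g, r = cv2.split(bgr.astype(np.float32))
    exg = 2.0 * g - r - b                                   # excess-green chromatic index
    exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask

# toy usage: a greener patch on a brownish (soil-like) background, in BGR order
img = np.full((200, 200, 3), (40, 60, 90), dtype=np.uint8)
img[60:140, 60:140] = (40, 160, 60)
print(int(candidate_vegetation_mask(img).sum() / 255))      # roughly 6400 foreground pixels
```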
Show Figures

Figure 1

Figure 1
<p>The general view of the orchard and the physical photo of the unmanned aerial vehicle (UAV). (<b>a</b>) General view of the citrus orchard captured by a UAV with an operation height of 100 m. (<b>b</b>) The physical photo of the agricultural UAV.</p>
Full article ">Figure 2
<p>In sufficient brightness (IB) and sufficient brightness (SB) examples of dataset 0. (<b>a</b>) Positive area sample of fruit trees under SB conditions. (<b>b</b>) Negative area sample of soil or withered grass under SB conditions. (<b>c</b>) Negative area sample of weeds under SB conditions. (<b>d</b>) Positive area sample of fruit trees under IB conditions. (<b>e</b>) Negative area sample of soil or withered grass under IB conditions. (<b>f</b>) Negative area sample of weeds under IB conditions.</p>
Full article ">Figure 3
<p>Flow chart of the proposed segmentation method.</p>
Full article ">Figure 4
<p>R (the red component in RGB color space), G (the green component), G–R (the traditional G–R chromatic value), and <span class="html-italic">im<sub>p</sub></span> (the relative G–R chromatic value) curves of a tree in a small weed coverage rate (SWCR) image. (<b>a</b>) Example of the <span class="html-italic">I<sub>a</sub></span> of a tree in an SWCR image. (<b>b</b>) R, G, R–G, and <span class="html-italic">im<sub>p</sub></span> curves of the line in <span class="html-italic">I<sub>a</sub></span>.</p>
Full article ">Figure 5
<p>Reflection (M) and source (S) images of an example image under large weed coverage rate (LWCR) conditions and their chromatic results. (<b>a</b>)An example image under LWCR conditions of S mapping <span class="html-italic">S</span><sub>1</sub>. (<b>b</b>) <span class="html-italic">M</span><sub>1</sub> of <span class="html-italic">S</span><sub>1</sub>, processed by multi-scale retinex (MSR). (<b>c</b>) The G–R, G–B, and 2G–R–B values of the <span class="html-italic">M</span><sub>1</sub> of <span class="html-italic">S</span><sub>1</sub>.</p>
Full article ">Figure 6
<p>Results of two illumination compensation examples. (<b>a</b>) An example under IB conditions, <span class="html-italic">I</span><sub>1</sub>. (<b>b</b>) <span class="html-italic">I</span><sub>1</sub> after applying the histogram equalization (HE). (<b>c</b>) <span class="html-italic">I</span><sub>1</sub> after applying the selective region intensity histogram equalization (SRIHE). (<b>d</b>) An example under IB conditions, <span class="html-italic">I</span><sub>2</sub>. (<b>e</b>) <span class="html-italic">I</span><sub>2</sub> after applying the HE. (<b>f</b>) <span class="html-italic">I</span><sub>2</sub> after applying the SRIHE.</p>
Full article ">Figure 7
<p>Data representation of the fruit tree areas in RGB color space before and after pre-treatment (SW, MW, and LW mean SWCR, medium weed coverage rate (MWCR), and LWCR conditions, respectively).</p>
Full article ">Figure 8
<p>Extracted result of fruit trees in SWCR images using the RG chromatic extraction method (ERGCM). (<b>a</b>) An example SWCR image in RGB color space. (<b>b</b>) The transformed result of (<b>a</b>) using relative chromatic mapping. (<b>c</b>) Separated result of (<b>b</b>) using the Otsu method. (<b>d</b>) The result of (<b>c</b>) using morphological treatments.</p>
Full article ">Figure 9
<p>Re-extracted results of the EMSRCM. (<b>a</b>) An example image in RGB color space. (<b>b</b>) Grayscale images transformed by MSR and 2G–R–B chromatic mapping on (<b>a</b>). (<b>c</b>) Filtered result obtained by applying an open filter and a top-hat filter to (<b>b</b>). (<b>d</b>) Separated results obtained by applying the Otsu method to (<b>c</b>). (<b>e</b>) Post-processed result of (<b>d</b>) using morphological treatments. (<b>f</b>) Transformed result of (<b>e</b>) using a convex hull method.</p>
Figure 10: ERGCM and EMSRCM results under different WCCs. (a) An example image of SWCR, Ia. (b) Ia processed by the ERGCM. (c) Ia processed by the EMSRCM. (d) An example image of MWCR, Ib. (e) Ib processed by the ERGCM. (f) Ib processed by the EMSRCM. (g) An example image of LWCR, Ic. (h) Ic processed by the ERGCM. (i) Ic processed by the EMSRCM.
Figure 11: Citrus tree segmentation results for the agricultural UAV. (a) Segmentation results under IB and SWCR conditions. (b) Segmentation results under SB and SWCR conditions. (c) Segmentation results under IB and MWCR conditions. (d) Segmentation results under SB and MWCR conditions. (e) Segmentation results under IB and LWCR conditions. (f) Segmentation results under SB and LWCR conditions. The red circles represent correct results in the images, blue circles denote erroneously segmented regions, and green squares indicate missing trees.
Figure 12: The receiver operating characteristic (ROC) curves of the support vector machine (SVM) under different WCCs on the test set.
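The Figure 9 caption above outlines the EMSRCM pipeline: MSR reflectance estimation, 2G–R–B chromatic mapping, Otsu thresholding, and morphological clean-up. As a rough, non-authoritative illustration of the chromatic-mapping, thresholding, and morphology steps only (the MSR stage is omitted, and the function name and file path are hypothetical), a minimal OpenCV sketch could look like this:

```python
import cv2
import numpy as np

def excess_green_otsu_segmentation(bgr):
    """Sketch of 2G-R-B chromatic mapping followed by Otsu thresholding and
    morphological clean-up; the MSR reflectance step used in the paper is omitted."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    exg = 2.0 * g - r - b                       # 2G-R-B emphasizes green vegetation
    exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu's method picks the vegetation/background threshold automatically.
    _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Opening removes speckle; closing fills small holes inside tree crowns.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Hypothetical usage:
# mask = excess_green_otsu_segmentation(cv2.imread("uav_orchard.jpg"))
```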
30 pages, 1451 KiB  
Article
An Efficient Routing Protocol Based on Stretched Holding Time Difference for Underwater Wireless Sensor Networks
by Zahid Wadud, Khadem Ullah, Abdul Baseer Qazi, Sadeeq Jan, Farrukh Aslam Khan and Nasru Minallah
Sensors 2019, 19(24), 5557; https://doi.org/10.3390/s19245557 - 16 Dec 2019
Cited by 10 | Viewed by 3873
Abstract
Underwater Wireless Sensor Networks (UWSNs) use acoustic waves as a communication medium because radio and optical waves suffer high attenuation underwater. However, acoustic signals propagate far more slowly than radio or optical waves. In addition, UWSNs face various intrinsic challenges, i.e., frequent node mobility with water currents, high error rates, low bandwidth, long delays, and energy scarcity. Various UWSN routing protocols have been proposed to overcome these challenges. Vector-based routing protocols confine communication within a virtual pipeline for the sake of directionality and define a fixed pipeline radius between the source node and the centerline station. The Energy-Scaled and Expanded Vector-Based Forwarding (ESEVBF) protocol limits the number of duplicate packets by expanding the holding time according to the propagation delay, and thus reduces energy consumption by exploiting the remaining energy of the Potential Forwarding Nodes (PFNs) at the first hop. The holding time mechanism of ESEVBF is, however, restricted to the first-hop PFNs of the source node, and the protocol fails when there is a void or energy hole at the second hop, affecting the reliability of the system. Our proposed protocol, the Extended Energy-Scaled and Expanded Vector-Based Forwarding (EESEVBF) protocol, exploits the holding time mechanism to suppress duplicate packets. Moreover, it tackles the hidden terminal problem, which yields a considerable reduction in the duplicate packets initiated by reproducing nodes. The holding time is calculated from four parameters: (i) the distance from the boundary of the transmission area relative to the PFNs' inverse energy at the first and second hops, (ii) the distance from the virtual pipeline, (iii) the distance from the source to the PFN at the second hop, and (iv) the distance from the first-hop PFN to its destination. The proposed protocol therefore stretches the holding time difference over two hops, resulting in lower energy consumption, decreased end-to-end delay, and an increased packet delivery ratio. The simulation results demonstrate that, compared to ESEVBF, EESEVBF achieves 20.2% lower delay, approximately 6.66% higher energy efficiency, and a further 11.26% reduction in redundant packet generation. Full article
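The abstract specifies which quantities enter the holding time (four distances, scaled by the PFNs' inverse residual energy) but not the exact expression. The sketch below is therefore a hypothetical illustration of how such a holding-time rule could be written; the equal weights, the linear combination, and the 250 m transmission range are assumptions, not the EESEVBF formula.

```python
def holding_time(d_boundary, d_pipeline, d_src_2hop, d_1hop_dst,
                 residual_energy, tx_range=250.0, max_holding=2.0,
                 w=(0.25, 0.25, 0.25, 0.25)):
    """Return a back-off (holding) time in seconds for a candidate forwarder.

    d_boundary      : distance from the boundary of the transmission area
    d_pipeline      : distance of the candidate from the virtual pipeline axis
    d_src_2hop      : distance from the source to the second-hop PFN
    d_1hop_dst      : distance from the first-hop PFN to its destination
    residual_energy : normalised residual energy in (0, 1]
    """
    # Normalised distance score: "worse" candidates get a larger score.
    score = (w[0] * d_boundary + w[1] * d_pipeline +
             w[2] * d_src_2hop + w[3] * d_1hop_dst) / tx_range
    # Scaling by inverse residual energy makes low-energy nodes defer longer,
    # so better-placed, better-powered nodes forward first and the others
    # cancel their copies when they overhear the transmission.
    return min(max_holding, max_holding * score / max(residual_energy, 1e-6))

# Example: a well-placed, well-charged candidate backs off only briefly.
print(holding_time(20.0, 10.0, 60.0, 40.0, residual_energy=0.9))
```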
(This article belongs to the Special Issue Underwater Sensor Networks)
Show Figures

Figure 1: Forwarder Node Selection Scenario.
Figure 2: Hidden Terminal Problem Scenario.
Figure 3: Holding time difference relationship with propagation delay.
Figure 4: Network architecture.
Figure 5: V_acoustic vs. Temperature.
Figure 6: V_acoustic vs. Salinity.
Figure 7: Forwarding Scenario.
Figure 8: Holding time estimation.
Figure 9: Ef vs. Normalized Energy.
Figure 10: Forwarder Node Selection Scenario.
Figure 11: Number of Nodes vs. PDR.
Figure 12: Number of Nodes vs. PDR.
Figure 13: Number of Nodes vs. Energy Consumption.
Figure 14: Number of Nodes vs. Energy Consumption.
Figure 15: Number of Nodes vs. Energy Consumption.
Figure 16: Number of Nodes vs. Data Copies Forwarded.
Figure 17: Number of Nodes vs. Data Copies Forwarded.
Figure 18: Number of Nodes vs. Data Copies Forwarded.
Figure 19: Feasible Region.
Figure 20: Number of Nodes vs. End-to-End Delay.
Figure 21: Number of Nodes vs. End-to-End Delay.
Figure 22: Number of Nodes vs. End-to-End Delay.
11 pages, 1846 KiB  
Article
Hydrogel Microparticles Functionalized with Engineered Escherichia coli as Living Lactam Biosensors
by Conghui Ma, Jie Li, Boyin Zhang, Chenxi Liu, Jingwei Zhang and Yifan Liu
Sensors 2019, 19(24), 5556; https://doi.org/10.3390/s19245556 - 16 Dec 2019
Cited by 12 | Viewed by 5180
Abstract
Recently, there has been an increasing need to synthesize valuable chemicals through biorefineries. Lactams are an essential family of commodity chemicals widely used in the nylon industry, with an annual production of millions of tons. The bio-production of lactams can benefit substantially from high-throughput lactam sensing strategies for screening lactam producers. Here, we present a robust, living lactam biosensor that is directly compatible with high-throughput analytical means. The biosensor is a hydrogel microparticle encapsulating living microcolonies of engineered lactam-responsive Escherichia coli. The microparticles can be manufactured facilely and at ultra-high throughput (up to 10,000,000 per hour) through droplet microfluidics. We show that the biosensors can specifically detect major lactam species in a dose-dependent manner, which can be quantified using flow cytometry. The biosensor could potentially be used for high-throughput metabolic engineering of lactam biosynthesis. Full article
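The dose-dependent fluorescence read-out described above is typically turned into a calibration curve. As an illustration only (this is not the authors' analysis, and the numbers below are placeholders rather than measured data), a Hill-type dose-response fit with SciPy might look as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, f_min, f_max, ec50, n):
    """Hill-type dose-response curve: fluorescence vs. lactam concentration."""
    return f_min + (f_max - f_min) * c**n / (ec50**n + c**n)

# Placeholder calibration data (synthetic, for illustration only): median
# microgel fluorescence at several hypothetical valerolactam concentrations (mM).
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 25.0, 50.0])
fluo = hill(conc, 100, 3000, 8.0, 1.2) * np.random.default_rng(0).normal(1, 0.05, conc.size)

# Fit the four Hill parameters and report the half-maximal concentration.
params, _ = curve_fit(hill, conc, fluo, p0=[fluo.min(), fluo.max(), 5.0, 1.0])
print("Fitted EC50 (mM):", params[2])
```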
(This article belongs to the Special Issue Lab-on-a-Chip Technology)
Show Figures

Figure 1: Schematic illustration of the living lactam biosensors. (a) The ChnR/Pb transcription factor-promoter pair-based lactam sensing pathway engineered in E. coli. (b) The hydrogel-based living biosensors generated by encapsulating engineered E. coli cells into agarose microgels. The recovered microgels are further incubated to form colonies and express the mCherry fluorescent protein if lactam is present in the environment, which can be detected fluorescently.
Figure 2: Microfluidic manufacturing of the living lactam biosensors. (a) The layout of the droplet microfluidic device for the generation of agarose microgels with encapsulated engineered E. coli cells. (b,c) Microscopic images of (b) the as-generated agarose-in-oil droplets and (c) the released microgel beads suspended in an aqueous solution. (d) Stacked fluorescent micrograph showing microcolonies expressing mCherry proteins in the microgels induced by 50 mM caprolactam. Scale bars: 50 µm.
Figure 3: Biosensor characterization. (a) Biosensor response to non-target and target chemicals. The concentrations of all chemicals are kept at 50 mM. The symbols and error bars refer to the mean and standard deviation of the values from separate microgel sensors (n = 400). (b) Influence of incubation time on the biosensing behavior of different target lactam species. The dashed guidelines highlight the exponential and saturation regimes, respectively. The shaded area represents the extent of the standard deviation of all data sets.
Figure 4: Biosensor calibration for dose-dependent lactam detection. (a) Fluorescence-activated cell sorting (FACS) histograms of the fluorescence intensity of individual microgel sensors subject to various levels of valerolactam. Each histogram collects the data from 600 microgels. (b) Dose-response plot of lactam biosensing. The dashed guidelines highlight the linear trend of the dose-response relations. The shaded area represents the extent of the standard deviation of all data sets.
16 pages, 8281 KiB  
Article
A LS-SVM based Measurement Points Classification Algorithm for Adjacent Targets in WSNs
by Xiang Wang, Zong-Min Zhao, Tao Wang, Zhun Zhang, Qiang Hao and Xiao-Ying Li
Sensors 2019, 19(24), 5555; https://doi.org/10.3390/s19245555 - 16 Dec 2019
Cited by 1 | Viewed by 2517
Abstract
In wireless sensor networks (WSNs), the problem of measurement origin uncertainty for observed data has a significant impact on the precision of multi-target tracking. In this paper, a novel algorithm based on the least squares support vector machine (LS-SVM) is proposed to classify measurement points belonging to adjacent targets. An extended Kalman filter (EKF) is first adopted to compute the predicted classification line for each sampling period, which is used to classify the sampling points and calculate the observed centers of closely moving targets. The LS-SVM is then trained on the classified points to obtain the best classification line, which serves as the reference classification line for the next sampling period. Finally, the locations of the targets are precisely estimated from the observed centers using the EKF. A series of simulations validates the feasibility and accuracy of the new algorithm, while the experimental results verify the efficiency and effectiveness of the proposed method. Full article
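For readers unfamiliar with LS-SVM classification, the sketch below shows the standard Suykens-style linear-kernel LS-SVM trained by solving its dual linear system. This is a generic illustration with placeholder data and a placeholder regularization value γ, not the authors' implementation or parameter choice.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    """Train a linear-kernel LS-SVM classifier.

    Solves  [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    where Omega_ij = y_i * y_j * <x_i, x_j>.  Returns (alpha, b).
    """
    n = X.shape[0]
    K = X @ X.T                                   # linear kernel matrix
    Omega = (y[:, None] * y[None, :]) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                        # alpha, bias

def lssvm_predict(X_train, y_train, alpha, b, X_new):
    """Decision rule sign(sum_i alpha_i * y_i * <x_i, x> + b)."""
    return np.sign((alpha * y_train) @ (X_train @ X_new.T) + b)

# Placeholder 2-D measurement points from two adjacent targets (labels -1/+1).
rng = np.random.default_rng(1)
Xa = rng.normal([0.0, 0.0], 0.3, (20, 2))
Xb = rng.normal([1.0, 1.0], 0.3, (20, 2))
X = np.vstack([Xa, Xb])
y = np.r_[-np.ones(20), np.ones(20)]
alpha, b = lssvm_train(X, y)
print(lssvm_predict(X, y, alpha, b, np.array([[0.1, 0.1], [0.9, 1.1]])))
```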
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1: Flowchart of the proposed algorithm.
Figure 2: Transformation for the classification line.
Figure 3: Parameter selection for γ.
Figure 4: The classification process.
Figure 5: The misclassification probability.
Figure 6: RMSEs across 100 Monte Carlo simulations.
Figure 7: Experimental setup. (a) The distribution of the sensors; (b) the experimental scene.
Figure 8: The details of the sensor.
Figure 9: Tracking comparison among the algorithms.
Figure 10: Positional errors for the algorithms. (a) Positional errors for target A; (b) positional errors for target B.
Figure 11: RMSE of 60 trajectories for targets A and B. (a) RMSE of the algorithms for target A; (b) RMSE of the algorithms for target B.
13 pages, 2078 KiB  
Article
A Pilot Study on Falling-Risk Detection Method Based on Postural Perturbation Evoked Potential Features
by Shenglong Jiang, Hongzhi Qi, Jie Zhang, Shufeng Zhang, Rui Xu, Yuan Liu, Lin Meng and Dong Ming
Sensors 2019, 19(24), 5554; https://doi.org/10.3390/s19245554 - 16 Dec 2019
Cited by 1 | Viewed by 3095
Abstract
In a human-robot hybrid system, recognition errors in the pattern recognition system may cause the robot to perform erroneous motor execution, which may lead to a falling risk. The human, in contrast, can clearly detect the existence of such errors, which is manifested in the characteristics of central nervous activity. To date, the majority of studies on falling-risk detection have focused primarily on computer vision and physical signals; there are no reports of falling-risk detection methods based on neural activity. In this study, we propose a novel method to monitor multiple erroneous motion events using electroencephalogram (EEG) features. Fifteen subjects participated in this study; they stood with an upper-limb-supported posture and received unpredictable postural perturbations. EEG signal analysis revealed a high negative peak with a maximum averaged amplitude of −14.75 ± 5.99 μV, occurring 62 ms after the postural perturbation. The xDAWN algorithm was used to reduce the high dimensionality of the EEG signal features, and Bayesian linear discriminant analysis (BLDA) was used to train a classifier. The detection rate of falling-risk onset is 98.67%, and the detection latency is 334 ms when a detection rate above 90% is set as the criterion for dangerous-event onset. Further analysis showed that the falling-risk detection method based on postural perturbation evoked potential features has good generalization ability: the model trained on typical event data achieved a 94.2% detection rate for unlearned atypical perturbation events. This study demonstrates the feasibility of using the neural response to detect dangerous fall events. Full article
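As a rough illustration of the classification stage, the sketch below trains a shrinkage LDA (a simple, well-known stand-in for the paper's BLDA) on downsampled epoch features (a simple stand-in for xDAWN spatial filtering), using synthetic epochs. The array shapes, the injected deflection, and the train/test split are all assumptions for demonstration only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for segmented EEG epochs: (n_epochs, n_channels, n_samples).
rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 32, 250        # e.g. 1 s at 250 Hz
X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, n_epochs)                      # 1 = perturbation, 0 = rest
# Inject a small fronto-central negativity into the "perturbation" epochs.
X[y == 1, :8, 10:30] -= 0.5

def epoch_features(epochs, decim=10):
    """Downsample each channel in time and flatten into one feature vector per epoch."""
    return epochs[:, :, ::decim].reshape(len(epochs), -1)

# Shrinkage LDA handles the many-features / few-epochs regime gracefully.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(epoch_features(X[:150]), y[:150])
print("held-out accuracy:", clf.score(epoch_features(X[150:]), y[150:]))
```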
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
Show Figures

Figure 1: (a) Experimental paradigm and equipment. (b) Typical force in newtons applied to an assistance handle over time during a balance perturbation experiment run. Force was measured using a sensor on the handle. In the preparation stage, the subjects placed their hands on the handles and leaned forward, gradually moving their center of gravity towards the handles. In the instability stage, an unpredictable airbag exhaust broke the original body balance, and the subjects responded to the postural perturbation and tried to restore balance. In the end, the subjects relaxed and returned to natural upright standing.
Figure 2: Synchronously recorded (a) typical handle reaction force, (b) grand-average FCZ channel ERP, and (c) false alarm rate in the rest state and detection rate in the instability state for the falling-risk event.
Figure 3: (a) Grand-average ERP recorded at the midline, C3, and C4 channels; (b) time-spatial topography in the rest state and at the ERP peak. The perturbation-evoked negative potential was mainly located at the frontal-central channels, with the mean amplitude peaking at FCZ around 62 ms.
Figure 4: (a) Postural perturbation evoked ERP at the FCZ channel and sEMG at the wrist extensor during the right-side perturbation event. (b) Recognition performance for the right-side postural perturbation event based on the left-side dataset; (c) recognition performance for the left-side postural perturbation event based on the right-side dataset.
21 pages, 3201 KiB  
Article
A Convolutional Neural Network for Compound Micro-Expression Recognition
by Yue Zhao and Jiancheng Xu
Sensors 2019, 19(24), 5553; https://doi.org/10.3390/s19245553 - 16 Dec 2019
Cited by 23 | Viewed by 23491
Abstract
Human beings are particularly inclined to express real emotions through micro-expressions with subtle amplitude and short duration. Though people regularly recognize many distinct emotions, for the most part, research studies have been limited to six basic categories: happiness, surprise, sadness, anger, fear, and disgust. As with normal expressions (i.e., macro-expressions), most current research into micro-expression recognition focuses on these six basic emotions. This paper describes an important group of micro-expressions, which we call compound emotion categories. Compound micro-expressions are constructed by combining two basic micro-expressions but reflect more complex mental states and more abundant human facial emotions. In this study, we first synthesized a Compound Micro-expression Database (CMED) based on existing spontaneous micro-expression datasets. The subtle features of micro-expressions make it difficult to observe their motion tracks and characteristics, so synthesizing compound micro-expression images involves many challenges and limitations. The proposed method first applies the Eulerian Video Magnification (EVM) method to enhance the facial motion features of basic micro-expressions for generating compound images. The consistent and differential facial muscle articulations (typically referred to as action units) associated with each emotion category were labeled to form the foundation for generating compound micro-expressions. Secondly, we extracted the apex frames of the CMED by a 3D Fast Fourier Transform (3D-FFT). Moreover, the proposed method calculates the optical flow information between the onset frame and the apex frame to produce an optical flow feature map. Finally, we designed a shallow network to extract high-level features from these optical flow maps. In this study, we combined four existing databases of spontaneous micro-expressions (CASME I, CASME II, CAS(ME)2, SAMM) to generate the CMED and test the validity of our network. The network framework designed in this study can therefore recognize the emotional information of both basic and compound micro-expressions well. Full article
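To make the onset-to-apex optical-flow idea concrete, here is a minimal sketch that computes a dense Farnebäck flow map with OpenCV and feeds it into a small PyTorch CNN. The flow parameters, the network depth and width, and the number of output classes are assumptions and do not reproduce the paper's architecture.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

def onset_apex_flow(onset_gray, apex_gray):
    """Dense Farneback optical flow between the onset and apex frames.
    Returns a 2-channel (dx, dy) map; the paper's exact flow settings differ."""
    flow = cv2.calcOpticalFlowFarneback(onset_gray, apex_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    return np.transpose(flow, (2, 0, 1)).astype(np.float32)   # (2, H, W)

class ShallowFlowCNN(nn.Module):
    """A minimal illustrative CNN over optical-flow maps (not the paper's network)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4))
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Hypothetical usage with two grayscale frames of the same size:
# flow = onset_apex_flow(onset, apex)
# logits = ShallowFlowCNN()(torch.from_numpy(flow).unsqueeze(0))
```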
(This article belongs to the Special Issue MEMS Technology Based Sensors for Human Centered Applications)
Show Figures

Figure 1: The Compound Facial Expressions of Emotion (CFEE).
Figure 2: The framework of the proposed method.
Figure 3: Compound facial expressions in real environments (left: "disgustedly surprised", right: "fearfully surprised").
Figure 4: The generation process of the CMED: (a) description of Positively Surprised; (b) description of Positively Negative; (c) description of Negatively Surprised; (d) description of Negatively Negative.
Figure 5: The compound micro-expression database.
Figure 6: Comparison of ME sequences at different magnification factors.
Figure 7: Optical flow maps of six MEs in the CASME II database.
Figure 8: Overall framework of the proposed network.
Figure 9: Recognition performance using different magnification factors.
Figure 10: Comparison of different magnification methods.
Figure 11: Optical flow feature maps with different λ and N_scales.
Figure 12: Recognition performance using different input graphs on the CMED (magnified vs. not magnified).
Figure 13: Confusion matrices: (a) the basic ME database; (b) the CMED.
17 pages, 8888 KiB  
Article
An Improved Linear Spectral Emissivity Constraint Method for Temperature and Emissivity Separation Using Hyperspectral Thermal Infrared Data
by Xinyu Lan, Enyu Zhao, Zhao-Liang Li, Jélila Labed and Françoise Nerry
Sensors 2019, 19(24), 5552; https://doi.org/10.3390/s19245552 - 16 Dec 2019
Cited by 11 | Viewed by 3475
Abstract
The linear spectral emissivity constraint (LSEC) method has been proposed to separate temperature and emissivity in hyperspectral thermal infrared data under the assumption that land surface emissivity (LSE) can be described by an equal-interval piecewise linear function. This paper combines a pre-estimate shape method with the LSEC method to provide an initial-shape estimation of the LSE, which creates a new piecewise scheme for land surface temperature (LST) and LSE separation. This new scheme is designated the pre-estimate shape (PES)-LSEC method. Comparisons with the LSEC method using simulated data sets show that the PES-LSEC method performs better in terms of accuracy for both LSE and LST. With an at-ground error of 0.5 K, the root-mean-square errors (RMSEs) of LST and LSE are 0.07 K and 0.0045, respectively. With moisture-profile scale factors of 0.8 and 1.2, the RMSEs of LST are 1.11 K and 1.14 K, respectively, and the RMSEs of LSE in each channel are mostly below 0.02 and 0.04, respectively, which is better than the LSEC method. In situ experimental data are adopted to validate our method: the results show that the RMSE of LST is 0.9 K and the mean LSE accuracy is 0.01. The PES-LSEC method achieves better accuracy with fewer segments than the LSEC method and preserves most of the crest and trough information of the emissivity. Full article
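The LSEC assumption above, that the LSE can be described by an equal-interval piecewise linear function, can be illustrated with a small least-squares fit over hat (tent) basis functions. The sketch below shows only that piecewise-linear constraint, not the full temperature/emissivity separation; the segment count and variable names are assumptions.

```python
import numpy as np

def piecewise_linear_fit(wavenumber, emissivity, n_segments=10):
    """Fit an equal-interval piecewise linear function to an emissivity spectrum
    by linear least squares.  Each column of the design matrix is a hat (tent)
    function centred on one knot, so any coefficient vector yields a continuous
    piecewise-linear curve.  This illustrates only the LSEC piecewise-linear
    emissivity assumption, not the full LST/LSE separation."""
    knots = np.linspace(wavenumber.min(), wavenumber.max(), n_segments + 1)
    basis = np.column_stack([np.interp(wavenumber, knots, np.eye(knots.size)[j])
                             for j in range(knots.size)])
    coeffs, *_ = np.linalg.lstsq(basis, emissivity, rcond=None)
    return knots, coeffs, basis @ coeffs    # knot emissivities and fitted spectrum

# Hypothetical usage with a spectrum sampled over 800-1200 cm^-1:
# knots, eps_knots, eps_fit = piecewise_linear_fit(wn, lse, n_segments=8)
```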
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1: Diagrammatic sketch of piecewise linear emissivity spectrum fitting. The red line is an actual emissivity spectrum (from a type of soil), the abscissa is the wavenumber, and the ordinate is the emissivity, while the blue lines are the fitting spectra.
Figure 2: Diagrammatic sketch of piecewise linear emissivity spectrum fitting. Panel (b) is a partially enlarged view of panel (a). The red line is an actual emissivity spectrum (from a type of soil), while the blue lines are the fitting spectra.
Figure 3: Diagrammatic sketch of soil emissivity estimation. The red dotted line is the actual emissivity spectrum, while the other lines are the estimated spectra. Estimated Emissivity #1 represents the estimated spectrum calculated with max(Tg_λ) using Equation (4). When the true land surface temperature (LST) is varied by +0.5 K, −0.5 K, and −1 K to obtain the estimated LST value, Estimated Emissivity #2, #3, and #4 represent the corresponding estimated spectra.
Figure 4: Flow diagram for the pre-estimate shape procedure.
Figure 5: Diagrammatic sketch of Der_LSE.
Figure 6: Estimated Emissivity #1 (black line) represents the estimated LSE calculated with max(Tg_λ) using Equation (4). The red line (Emissivity #2) is the estimated shape of the LSE using the pre-estimate shape procedure.
Figure 7: Selected emissivity spectra from the ASTER spectral library.
Figure 8: Retrieval results for three spectra (red-orange sandy loam, sea water, and green grass) using the PES-LSEC method. The calculation results of Equation (4) with max(Tg_λ) are drawn as black lines (Emissivity #1). Red lines (Emissivity #2) are the estimated shapes of the LSE using the pre-estimate shape procedure. Black points are the identified inflection points. Green lines are the true emissivity values used in the simulation. Blue lines are the final emissivity retrieval results of the PES-LSEC method.
Figure 9: RMSE_ε,j of the two methods.
Figure 10: RMSE_ε and RMSE_T for the LSEC and PES-LSEC methods with the at-ground radiance error.
Figure 11: RMSE_ε,j for the 800–1200 cm⁻¹ region. The blue and black lines are the retrieval results of the PES-LSEC method with moisture-profile scale factors of −0.2 and 0.2, respectively. The red and green lines are the LSEC results with moisture-profile scale factors of −0.2 and 0.2, respectively. ISSTES08 and ISSTES12 denote the RMSE_ε,j of the emissivity obtained with the ISSTES method for moisture-profile scale factors of −0.2 and 0.2, respectively.
Figure 12: Emissivity spectra of the nine samples.
Figure 13: Laboratory emissivities of the nine samples and the emissivities retrieved using the LSEC and PES-LSEC methods.
Figure 14: ΔT_s of the nine samples.
Figure 15: RMSE_ε of the nine emissivity spectra.
Previous Issue
Back to Top