
Next Issue: Volume 19, July-1
Previous Issue: Volume 19, June-1
Sensors, Volume 19, Issue 12 (June-2 2019) – 189 articles

Cover Story (view full-size image): High-resolution (HR) mapping of gastrointestinal (GI) bioelectrical activity is an emerging method for defining GI dysrhythmias such as gastroparesis and functional dyspepsia. Currently, no solution is available for conducting HR mapping in long-term studies. We have therefore developed an implantable 64-channel closed-loop near-field communication system for real-time monitoring of gastric electrical activity. The system is composed of implantable (IU), wearable (WU), and stationary (SU) units. Simultaneous 125 kb/s IU-WU data telemetry and WU-IU adjustable power transfer are carried out through a 13.56 MHz RFID link. The data retrieved at the WU are then either transmitted to the SU for real-time monitoring through a 2.4 GHz RF transceiver or stored locally on a micro-SD memory card. The signals recorded at the IU and received by the SU are verified by a graphical user interface.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
16 pages, 3276 KiB  
Article
Methods for Simultaneous Robot-World-Hand–Eye Calibration: A Comparative Study
by Ihtisham Ali, Olli Suominen, Atanas Gotchev and Emilio Ruiz Morales
Sensors 2019, 19(12), 2837; https://doi.org/10.3390/s19122837 - 25 Jun 2019
Cited by 58 | Viewed by 10576
Abstract
In this paper, we propose two novel methods for robot-world-hand–eye calibration and provide a comparative analysis against six state-of-the-art methods. We examine the calibration problem from two alternative geometrical interpretations, called ‘hand–eye’ and ‘robot-world-hand–eye’, respectively. The study analyses the effects of specifying the objective function as a pose error or reprojection error minimization problem. We provide three real and three simulated datasets with rendered images as part of the study. In addition, we propose a robotic arm error modeling approach to be used along with the simulated datasets for generating a realistic response. The tests on simulated data are performed both in ideal cases and with pseudo-realistic robotic arm pose and visual noise. Our methods show significant improvement and robustness on many metrics in various scenarios compared to state-of-the-art methods.
(This article belongs to the Special Issue Intelligent Systems and Sensors for Robotics)
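The two interpretations named in the abstract are commonly written as the matrix equations AX = XB (hand–eye) and AX = ZB (robot-world-hand–eye), where X and Z are the unknown rigid transforms. As a minimal sketch of the latter identity only (the paper's actual solvers are not reproduced here; every transform below is synthetic and hypothetical):

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical ground truth: X maps gripper to camera, Z maps robot base to world.
X = make_T(rot_z(0.3), [0.1, 0.0, 0.2])
Z = make_T(rot_z(-0.5), [1.0, 0.5, 0.0])

# For each synthetic robot pose A_i (base -> gripper), derive the camera pose
# B_i (world -> camera) so the robot-world-hand-eye identity A_i X = Z B_i holds.
rng = np.random.default_rng(0)
residuals = []
for _ in range(5):
    A = make_T(rot_z(rng.uniform(-1, 1)), rng.uniform(-1, 1, 3))
    B = np.linalg.inv(Z) @ A @ X      # noise-free "measurement"
    residuals.append(np.linalg.norm(A @ X - Z @ B))

print(max(residuals))   # ~0: the identity holds exactly without noise
```

A real calibration runs this in reverse: given noisy (A_i, B_i) pairs, it solves for the X and Z that minimize exactly these residuals (or, in the paper's alternative formulation, the reprojection error).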
Figures
Figure 1: Formulations relating geometrical transformations for calibration; (a) hand–eye calibration; (b) robot-world-hand–eye calibration.
Figure 2: An example of the setup for acquiring the datasets; (a) robotic arm moving in the workspace; (b) cameras and lenses for data acquisition.
Figure 3: Examples of captured images from datasets 1 through 3; (a) checkerboard from dataset 1; (b) checkerboard from dataset 2; (c) ChArUco from dataset 3.
Figure 4: Examples of rendered images for the simulated datasets 4 through 6; (a) excerpt from dataset 4; (b) excerpt from dataset 5; (c) excerpt from dataset 6.
Figure 5: Flowchart of the orientation noise modelling approach.
Figure 6: Probability distribution functions; (a) the measured position error of the robotic arm; (b) the modeled orientation error of the robotic arm.
Figure 7: Metric error results for Dataset 5 with constant robot pose noise; (a) mean rotation error; (b) mean translation error; (c) reprojection error; (d) absolute rotation error against ground truth; (e) absolute translation error against ground truth.
22 pages, 2145 KiB  
Article
AMiCUS—A Head Motion-Based Interface for Control of an Assistive Robot
by Nina Rudigkeit and Marion Gebhard
Sensors 2019, 19(12), 2836; https://doi.org/10.3390/s19122836 - 25 Jun 2019
Cited by 26 | Viewed by 4923
Abstract
Within this work we present AMiCUS, a human–robot interface that enables tetraplegics to control a multi-degree-of-freedom robot arm in real time using solely head motion, empowering them to perform simple manipulation tasks independently. The article describes the hardware, software and signal processing of AMiCUS and presents the results of a volunteer study with 13 able-bodied subjects and 6 tetraplegics with severe head motion limitations. As part of the study, the subjects performed two different pick-and-place tasks. Usability was assessed with a questionnaire, and the overall performance and main control elements were evaluated with objective measures such as completion rate and interaction time. The results show that the mapping of head motion onto robot motion is intuitive and the given feedback is useful, enabling smooth, precise and efficient robot control and resulting in high user acceptance. Furthermore, the robot did not move unintentionally, giving a positive prognosis for the safety requirements involved in certifying a product prototype. On top of that, AMiCUS enabled every subject to control the robot arm, independent of prior experience and degree of head motion limitation, making the system available to a wide range of motion-impaired users.
(This article belongs to the Special Issue Assistance Robotics and Biosensors 2019)
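Figure 4 of this article specifies a Gompertz curve as the head-angle-to-robot-speed transfer function, with parameters A_max = 1, δ = −30 and r = −10. A short sketch of that curve (the interpretation of x as a normalized head deflection is an assumption here):

```python
import numpy as np

def gompertz(x, a_max=1.0, delta=-30.0, r=-10.0):
    """Gompertz curve y = a_max * exp(delta * exp(r * x)).

    Default parameters follow Figure 4 of the article (A_max = 1,
    delta = -30, r = -10); x is assumed to be a normalized head angle."""
    return a_max * np.exp(delta * np.exp(r * x))

x = np.linspace(0.0, 1.0, 101)   # normalized head deflection
y = gompertz(x)
print(float(y[0]), float(y[-1]))  # near 0 at rest, saturating toward a_max
```

With δ < 0 and r < 0 the curve stays near zero for small deflections, which produces the soft deadzone mentioned in the caption, then rises monotonically toward A_max.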
Figures
Figure 1: Kinematics of the cervical spine. From the kinematic point of view, the human cervical spine can be approximated by a ball joint; that is, every motion can be divided into single rotations around three orthogonal axes that intersect in one point. This point, the center of rotation, roughly coincides with the location of the thyroid gland. As a result, a rigid body placed onto a human head moves on a spherical surface during head motion.
Figure 2: Coordinate systems of the AMiCUS system. Degrees of freedom (DOFs) of the same color are controlled by the same head DOF. The zero orientation of the head coordinate system depends on whether the cursor or the robot is controlled; the head coordinate system is denoted by ^(h_r)α = ^(h_r)(φ, ϑ, ψ)^T during robot control and ^(h_c)α = ^(h_c)(φ, ϑ, ψ)^T during cursor control.
Figure 3: Mapping of head DOFs onto robot DOFs. Four different groups, that is, Gripper, Orientation, Vertical Plane and Horizontal Plane, are depicted. The user is able to switch between groups in order to control all DOFs of the robot.
Figure 4: Robot control transfer function. A Gompertz function is used as the transfer function, with parameters A_max = 1, δ = −30 and r = −10. The space between the dashed lines indicates the deadzone.
Figure 5: Graphical user interface during Robot Control Mode. The GUI displays an icon of the current robot group (top right), the image of the gripper camera (bottom right), an information line (bottom) and feedback about the current head angle given by the arrow (left). The square represents the deadzone in which the robot arm cannot be moved.
Figure 6: Head gesture, displayed as its ^(h_r)ϑ angle over time t. Parameters: d_max = amplitude, w = peak width, t_c = location of the peak center.
Figure 7: Graphical user interface during Cursor Control Mode. The GUI contains one Slide Button for each robot group. The dwell buttons in the top toolbar allow the user to perform several actions, such as pausing control, starting calibration routines or exiting the program.
Figure 8: Slide Button. The following steps are necessary for successful activation: when the button is in its neutral state (S_0), the mouse cursor has to dwell in the button (S_1) until visual and acoustic feedback occurs. Then the button has to be moved to the right along the rail (S_2); at the end of the rail, visual and acoustic feedback is given. Next, the button has to be moved to the left along the rail (S_3). When the button reaches the initial position (S_4), it is activated and the assigned action is performed.
Figure 9: Experimental setup of the predefined task. The subjects were clearly instructed how to move the robot. Movements 1–3 had to be performed in the Vertical Plane group, movements 4–6 in the Horizontal Plane group. After movement 6, the subjects had to perform one 90° rotation around each rotation axis.
Figure 10: Experimental setup of the complex task. The users had to find their own control strategy to solve the task.
Figure 11: Completion rate of the complex task. There was no statistical difference between the users with full and restricted ROM. The overall completion rate of the complex task was 72.2% (n = 18).
Figure 12: Comparison between Slide Button and head gesture, evaluated in terms of success rate and activation time.
Figure 13: Control speed evaluation. Mean and standard deviation of subjective velocities during gripping, linear motion and rotations (n = 18).
13 pages, 2256 KiB  
Article
Real-time Precise Point Positioning with a Xiaomi MI 8 Android Smartphone
by Bo Chen, Chengfa Gao, Yongsheng Liu and Puyu Sun
Sensors 2019, 19(12), 2835; https://doi.org/10.3390/s19122835 - 25 Jun 2019
Cited by 97 | Viewed by 8791
Abstract
The Global Navigation Satellite System (GNSS) positioning technology using smartphones can be applied to many aspects of everyday life, and the world's first dual-frequency GNSS smartphone, the Xiaomi MI 8, represents a new trend in the development of GNSS positioning technology with mobile phones. The main purpose of this work is to explore the best real-time positioning performance that can be achieved on a smartphone without reference stations. Analysis of the GNSS raw measurements shows that all three tested mobile phones exhibit differences between pseudorange and carrier phase observations that are not fixed, so a PPP (precise point positioning) method was modified accordingly. Using a Xiaomi MI 8 smartphone, the modified real-time PPP positioning strategy, which estimates two clock biases of the smartphone, was applied. The results show that using multi-GNSS data effectively improves positioning performance: the average horizontal and vertical RMS positioning errors are 0.81 and 1.65 m, respectively (using GPS, BDS, and Galileo data), and in each time period the positioning errors in the N and E directions drop below 1 m in less than 30 s.
(This article belongs to the Section Remote Sensors)
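The horizontal and vertical RMS figures quoted in the abstract are conventionally computed from epoch-wise errors in the local North/East/Up frame. A sketch with synthetic error series (the magnitudes are invented to resemble the reported regime, not the paper's data):

```python
import numpy as np

# Hypothetical epoch-wise position errors (meters) in the local N/E/U frame;
# real values would be PPP solutions minus the known control point coordinates.
rng = np.random.default_rng(1)
n_err = rng.normal(0.0, 0.6, 500)
e_err = rng.normal(0.0, 0.5, 500)
u_err = rng.normal(0.0, 1.6, 500)

# Horizontal RMS combines the N and E components; vertical RMS uses U alone.
rms_h = float(np.sqrt(np.mean(n_err**2 + e_err**2)))
rms_v = float(np.sqrt(np.mean(u_err**2)))
print(rms_h, rms_v)
```

Note the convention: the horizontal RMS pools the squared N and E errors per epoch before averaging, which is why a 0.81 m horizontal figure can coexist with sub-meter errors in each individual axis.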
Figures
Figure 1: Mobile phones and geodetic receivers at the site of the first and third datasets. A plastic board sits on top of the middle tripod, and the smartphones are placed on the board.
Figure 2: The control point named GE01 at Southeast University Jiulonghu Campus. The precise coordinates of this control point were obtained by a geodetic receiver through the network RTK positioning method. (a) Distant view of the control point; (b) close-up of the control point; (c) a Xiaomi MI 8 smartphone placed on the control point.
Figure 3: Pseudorange and carrier phase observations (BDS 02 satellite) of a geodetic receiver and a Huawei P10 smartphone. (a) Observations of the geodetic receiver; (b) observations of the Huawei P10 smartphone. The carrier phase acquired by smartphones is accumulated from 0 m at the beginning of observation, so the values are small; in this figure, a constant has been added to the phone's carrier phase values.
Figure 4: Observations (Galileo 03 satellite) and their change rate for a Xiaomi MI 8 smartphone. (a) Raw observations; (b) change rate of the observations.
Figure 5: Differences between the pseudorange change rate and carrier phase change rate of all GNSS satellites. (a) Huawei P10 smartphone; (b) Xiaomi MI 8 smartphone (L1/E1 frequency).
Figure 6: North (N), East (E), and Up (U) direction errors of the precise point positioning (PPP) results for the five time periods. (a) 1st time period (G + C + E); (b) 2nd time period (G + C + E); (c) 3rd time period (G + C + E); (d) 4th time period (G + C + E); (e) 5th time period (G + C + E); (f) 1st time period (GPS); (g) 2nd time period (GPS); (h) 4th time period (GPS). "G", "C" and "E" denote GPS, BDS, and Galileo.
11 pages, 3297 KiB  
Article
Nano-Cracked Strain Sensor with High Sensitivity and Linearity by Controlling the Crack Arrangement
by Hyunsuk Jung, Chan Park, Hyunwoo Lee, Seonguk Hong, Hyonguk Kim and Seong J. Cho
Sensors 2019, 19(12), 2834; https://doi.org/10.3390/s19122834 - 25 Jun 2019
Cited by 30 | Viewed by 7697
Abstract
Studies on wearable sensors that monitor various movements by attaching them to the body have received considerable attention. Crack-based strain sensors are more sensitive than other sensors; owing to their high sensitivity, they have been investigated for measuring minute deformations occurring on the skin, such as pulse. However, existing sensors have limited sensitivity in the low strain range and a nonlinearity that renders any calibration process complex and difficult. In this study, we propose a pre-strain and sensor-extending process to improve the sensitivity and linearity of the sensor. By using these processes, we were able to control the morphology and alignment of cracks and regulate the sensitivity and linearity of the sensor. Even when fabricated in the same manner, a sensor that underwent the pre-strain and extending processes had a sensitivity 100 times greater than that of normal sensors. Thus, our crack-based strain sensor combined high sensitivity (gauge factor GF = (ΔR/R0)/ε > 5000), linearity, and low hysteresis at low strain (<1%). Given its high sensing performance, the sensor can be used to measure micro-deformations such as pulse waves and voice.
(This article belongs to the Section Sensor Materials)
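The gauge factor definition given in the abstract, GF = (ΔR/R0)/ε, makes a quick worked example possible. The resistance values below are illustrative numbers chosen to land in the reported regime, not measurements from the paper:

```python
# Gauge factor of a strain sensor: GF = (dR/R0) / eps.
# Illustrative values only, not measurements from the article.
r0 = 10e3        # unstrained resistance (ohms)
r = 510e3        # resistance under strain (ohms)
eps = 0.01       # 1% strain
gf = ((r - r0) / r0) / eps
print(gf)        # on the order of the "GF > 5000" regime reported in the text
```

For comparison, a conventional metal-foil strain gauge has a GF of roughly 2, which is why a 50-fold resistance change at 1% strain is considered extreme sensitivity.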
Figures
Figure 1: (a) Fabrication process of the crack-based strain sensor. (b) Schematic of the four sensor types and their pre-processing prior to measurement. (c) Relative change in resistance of the four sensor types under 1% strain.
Figure 2: (a) Schematic of the sensor-evaluating system. (b) Schematic of the sensor.
Figure 3: (a) Schematic of the pre-strained sensor and its scanning electron microscope (SEM) images. (b) The previous strain's resistance value affects the following strain's resistance value. (c) Shortcoming of the pre-strained sensor. (d) Schematic of the sensor-extending process, its concept, and SEM images of the extended sensor. (e) Strain versus relative change in resistance according to sensor extension. (f) Shortcoming of sensor extension: high initial resistance.
Figure 4: (a) Relative resistance change of the four sensor types under 1% strain. (b) Gauge factor at 0.6% and 1% strain according to the pre-strain percentage under 1% sensor extension. (c) Relative resistance change under 20% strain: solid red box, 20% pre-strain and 2% sensor extension under 1% strain; dotted orange box, 20% pre-strain and 3% sensor extension under 1% strain. (d) Relative resistance change under 10% strain: solid red box, 10% pre-strain and 2% sensor extension under 1% strain; dotted orange box, 10% pre-strain and 3% sensor extension under 1% strain.
Figure 5: (a) Relative resistance change during cycling at 0–1% strain. (b) Relative resistance change during 2000 cycles (inset: resistance change at specific cycles). (c) High-GF strain sensor after the pre-strain and sensor-extending processes. (d) Linear strain sensor after the pre-strain and sensor-extending processes.
Figure 6: (a) Schematic of the sensor attached to a speaker generating a 60 BPM metronome. (b,c) Difference in relative resistance change with and without the hybrid process. (d) Relative resistance change during PU bead loading and unloading. (e) Relative resistance change of the pulse signal (inset: photograph of the attachment spot).
16 pages, 4898 KiB  
Article
Improving Optical Measurements: Non-Linearity Compensation of Compact Charge-Coupled Device (CCD) Spectrometers
by Münevver Nehir, Carsten Frank, Steffen Aßmann and Eric P. Achterberg
Sensors 2019, 19(12), 2833; https://doi.org/10.3390/s19122833 - 25 Jun 2019
Cited by 20 | Viewed by 10813
Abstract
Charge-coupled device (CCD) spectrometers are widely used as detectors in analytical laboratory instruments and as sensors for in situ optical measurements. However, as the applications become more complex, the physical and electronic limits of CCD spectrometers may restrict their applicability. Errors due to dark currents, temperature variations, and blooming can be readily corrected. However, a correction for the uncertainty of integration time and wavelength calibration is typically lacking in most devices, and detector non-linearity may distort the signal by up to 5% for some measurements. Here, we propose a simple correction method to compensate for non-linearity errors in optical measurements where compact CCD spectrometers are used. The results indicate that the error due to the non-linearity of a spectrometer can be reduced from several hundred counts to about 40 counts if the proposed correction function is applied.
(This article belongs to the Section Physical Sensors)
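The abstract does not spell out the correction function itself, so the following is only a generic illustration of the idea: fit a polynomial that maps measured counts back onto the linear counts-versus-integration-time response. The distortion model and all magnitudes below are invented for the sketch:

```python
import numpy as np

# Toy CCD pixel: the true response is linear in integration time, but the
# detector compresses high counts. Both the distortion model and its
# magnitude are assumptions for illustration only.
t = np.linspace(1.0, 100.0, 200)                 # integration time (ms)
true_counts = 600.0 * t                          # ideal linear response
measured = true_counts - 2e-6 * true_counts**2   # mild saturation-like error

# Fit a polynomial mapping measured counts back onto the linear response
# (inputs normalized for numerical conditioning), then apply it as a correction.
scale = measured.max()
coeffs = np.polyfit(measured / scale, true_counts, deg=5)
corrected = np.polyval(coeffs, measured / scale)

raw_err = float(np.max(np.abs(measured - true_counts)))
cor_err = float(np.max(np.abs(corrected - true_counts)))
print(raw_err, cor_err)   # the fit removes most of the non-linearity
```

In practice the "linear reference" would come from the low-count region where the detector is trusted to behave linearly, mirroring the counts-versus-integration-time characterization shown in Figure 7.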
Figures
Figure 1: Schematic of a Hamamatsu C10082CA miniaturized spectrometer [1].
Figure 2: Simplified diagram of the basic internal circuit connecting the charge-coupled device (CCD) line with the computer interface (RS232 or USB). The CCD line is connected via an amplifier (amp) to the analog-to-digital converter (ADC); the latter is either part of a microcontroller or connected to a microcontroller that communicates with the computer.
Figure 3: Diagram showing how all possible CCD output values (after amplification; x-axis) are mapped onto the value range of the analog-to-digital converter (ADC; y-axis). As the zero value of the CCD (and the amplifier) may vary with time, temperature, and/or other factors, this zero value is mapped to a value that lies well inside the value range of the ADC.
Figure 4: Light intensity spectrum of the white LED used in this study.
Figure 5: Experimental setup with instrumentation in the cool box for quantifying the non-linearity of the miniature spectrometer.
Figure 6: Qualitative illustration of the non-linearities of the CCD detector. The ratios f of I1(λ) to Ii(λ) are plotted versus wavelength (nm) for several integration times. In theory, the f ratios should be constant for all wavelengths of a spectrum. The dashed red lines are the expected characteristics for two arbitrarily chosen integration times; the black lines are the observed characteristics for several integration times (compare also with Figure 11).
Figure 7: Pixel intensities (counts) versus integration time (ms). A single line represents data from one pixel; data from 200 representative pixels (equally spaced between 400–800 nm) are shown. All data lines appear linear up to 50,000 counts and then deviate strongly from the ideal line.
Figure 8: ADC offset in counts for all pixels of the CCD line. The black dots were determined for an enabled light source (illuminated), the red dots for a disabled light source (dark).
Figure 9: Differences between corrected and referential intensities for selected polynomials (I_corr − I_raw; y-axis) versus integration time (ms). The upper panel shows a wider range of intensity counts for the first-degree polynomial function; the lower panel shows a narrower range for the second-, third-, and ninth-degree functions.
Figure 10: Differences between corrected and referential raw intensities expressed as absorbance error values (y-axis) at different absorbance levels (x-axis).
Figure 11: Illustration of the linear dependence of the corrected intensities on the integration time. The ratios f of I1(λ) to Ii(λ) are plotted versus wavelength (nm) for several integration times. The correction is valid for different intensities at different integration times and wavelengths, verifying the applicability of the correction function. The dashed red lines are the expected characteristics for two arbitrarily chosen integration times; the black lines are the corrected observed characteristics for several integration times (compare also with Figure 6).
Figure 12: Relative detector noise as a function of intensity (counts) of the CCD detector, given as 1σ standard deviation (%).
26 pages, 1046 KiB  
Article
PALOT: Profiling and Authenticating Users Leveraging Internet of Things
by Pantaleone Nespoli, Mattia Zago, Alberto Huertas Celdrán, Manuel Gil Pérez, Félix Gómez Mármol and Félix J. García Clemente
Sensors 2019, 19(12), 2832; https://doi.org/10.3390/s19122832 - 25 Jun 2019
Cited by 15 | Viewed by 5663
Abstract
Continuous authentication was introduced to provide novel mechanisms to validate users' identity and address the problems and limitations exposed by traditional techniques. However, this methodology poses several challenges that remain unsolved. In this paper, we present a novel framework, PALOT, that leverages IoT to provide context-aware, continuous and non-intrusive authentication and authorization services. To this end, we propose a formal information system model based on ontologies, representing the main source of knowledge of our framework. Furthermore, to recognize users' behavioral patterns within the IoT ecosystem, we introduce a new module called the "confidence manager". The module is integrated into an extended version of our earlier framework architecture, IoTCAF, which is adapted accordingly to include this component. Exhaustive experiments demonstrated the efficacy, feasibility and scalability of the proposed solution.
(This article belongs to the Special Issue Sensor Systems for Internet of Things)
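The abstract does not detail how the confidence manager scores behavior, so the following is a generic sketch of the continuous-authentication pattern, not PALOT's actual algorithm: fold per-event behavioral match scores into a running confidence value and demand re-authentication when it drops too low.

```python
# Generic continuous-authentication sketch (NOT the paper's algorithm):
# an exponential moving average over per-event behavioral match scores.
def update_confidence(confidence, score, alpha=0.3):
    """Blend the newest match score (0..1) into the running confidence."""
    return (1 - alpha) * confidence + alpha * score

confidence = 0.5                              # neutral prior
history = []
for score in [0.9, 0.8, 0.95, 0.2, 0.1]:      # last two events look anomalous
    confidence = update_confidence(confidence, score)
    history.append(confidence)
    print(round(confidence, 3), "OK" if confidence >= 0.5 else "re-authenticate")
```

The smoothing factor alpha trades responsiveness against tolerance of isolated anomalous events; a real system would also weight events by the reliability of the IoT device that produced them.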
Figures
Figure 1: Main components for continuous authentication in IoT.
Figure 2: Set of ontologies making up the continuous authentication framework: Location, Person, and IoT Devices ontologies.
Figure 3: Overview of the new PALOT confidence manager module.
Figure 4: Overview of the PALOT multilayered architecture.
Figure 5: Time required by the PALOT decision modules.
Figure 6: Resource consumption of the proposed framework with respect to the number of people and events.
Figure 7: Training and testing times of the proposed system.
Figure 8: Confidence evolution for a target user.
Figure 9: Confidence evolution across the dataset.
Figure 10: Confidence with regard to the quality of the dataset.
16 pages, 8221 KiB  
Article
3D Pose Detection of Closely Interactive Humans Using Multi-View Cameras
by Xiu Li, Zhen Fan, Yebin Liu, Yipeng Li and Qionghai Dai
Sensors 2019, 19(12), 2831; https://doi.org/10.3390/s19122831 - 25 Jun 2019
Cited by 18 | Viewed by 6350
Abstract
We propose a method to automatically detect 3D poses of closely interacting humans from sparse multi-view images at one time instance. This is a challenging problem due to the strong partial occlusion and truncation between humans, and because no tracking process provides a priori pose information. To solve this problem, we first obtain 2D joints in every image using OpenPose and human semantic segmentation results from Mask R-CNN. With the 3D joints triangulated from multi-view 2D joints, a two-stage assembling method is proposed to select the correct 3D pose from thousands of pose seeds combined by joint semantic meanings. We further present a novel approach to minimize the interpenetration between human shapes in close interaction. Finally, we test our method on multi-view human-human interaction (MHHI) datasets. Experimental results demonstrate that our method achieves a high visualized correct rate and outperforms the existing method in accuracy and real-time capability.
(This article belongs to the Special Issue Multi-Modal Sensors for Human Behavior Monitoring)
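The triangulation step mentioned in the abstract, lifting matched 2D joints to 3D, is classically done with the linear DLT method. A self-contained sketch with two hypothetical cameras (the projection matrices and the 3D point are synthetic; the paper's own pipeline uses many more views and real detections):

```python
import numpy as np

def triangulate(points_2d, projections):
    """Linear (DLT) triangulation of one joint from several views.

    points_2d:   list of (u, v) normalized pixel observations of the joint
    projections: list of 3x4 camera projection matrices
    """
    rows = []
    for (u, v), P in zip(points_2d, projections):
        rows.append(u * P[2] - P[0])   # each view contributes two
        rows.append(v * P[2] - P[1])   # linear constraints on X
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                         # null vector = homogeneous 3D point
    return X[:3] / X[3]                # dehomogenize

# Toy check with two hypothetical cameras observing a known 3D point.
X_true = np.array([0.2, -0.1, 4.0, 1.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # shifted camera
obs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]

X_est = triangulate(obs, [P1, P2])
print(X_est)   # recovers [0.2, -0.1, 4.0]
```

With noisy detections the same least-squares machinery yields the best linear estimate, and it is exactly such triangulated candidates that the paper's two-stage assembling method then prunes and combines.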
Show Figures

Figure 1

Figure 1
<p>3D pose detection result of multiple markerless persons with close interactions from multi-view images at one time instance. Visualized results are shown as 2D poses in every view image reprojected from 3D pose.</p>
Full article ">Figure 2
<p>Method Overview. With multi-view images as input (<b>a</b>), we first detect the 2D joints (<b>b</b>) and human semantic segmentation (<b>c</b>) results with learning method in every view image. After triangulating 2D joints to 3D, we could get all 3D pose seeds (<b>d</b>) with all connection of semantic neighbor joints. Then we reduce 3D pose seeds number through pre-assembling process (<b>e</b>). At last, through pose-assembling optimization, the final 3D poses (<b>g</b>) could be obtained combined with SMPL models (<b>f</b>) fitting from pre-assembling pose seeds.</p>
Full article ">Figure 3
<p>Symmetry Ankle Joints Grouping.</p>
Full article ">Figure 4
<p>Comparison results for 3D pose detection without (<b>Middle</b>) and with (<b>Right</b>) the interpenetration constraint O.</p>
Full article ">Figure 5
<p>Comparison results for 3D pose detection without (<b>Middle</b>) and with (<b>Right</b>) the symmetry parts gray term R.</p>
Full article ">Figure 6
<p>3D pose detection results on MHHI datasets. The first four columns are the 2D pose results reprojected from 3D pose result. The last column is the 3D pose estimation result of our method.</p>
Full article ">Figure 7
<p>Comparison results with voting method. The first two columns are the pose detection results using voting method. Results of our method are shown in the last two columns.</p>
Full article ">Figure 8
<p>Positions of 12 Camera Views.</p>
Full article ">Figure 9
<p>Results of different view groups.</p>
Full article ">Figure 10
<p>Results of different view numbers.</p>
Full article ">
18 pages, 5542 KiB  
Article
Analyses of Time Series InSAR Signatures for Land Cover Classification: Case Studies over Dense Forestry Areas with L-Band SAR Images
by Hye-Won Yun, Jung-Rack Kim, Yun-Soo Choi and Shih-Yuan Lin
Sensors 2019, 19(12), 2830; https://doi.org/10.3390/s19122830 - 25 Jun 2019
Cited by 16 | Viewed by 3382
Abstract
As demonstrated in prior studies, InSAR holds great potential for land cover classification, especially considering its wide coverage and transparency to climatic conditions. In addition to features such as backscattering coefficient and phase coherence, the temporal migration in InSAR signatures provides information that [...] Read more.
As demonstrated in prior studies, InSAR holds great potential for land cover classification, especially considering its wide coverage and transparency to climatic conditions. In addition to features such as the backscattering coefficient and phase coherence, the temporal migration of InSAR signatures provides information capable of discriminating land cover types in the target area. Exploiting InSAR signatures was expected to offer advantages for tracing land cover change over extensive areas; however, extracting suitable features from InSAR signatures was a challenging task. Combining time series amplitudes and phase coherences through linear and nonlinear compressions, we showed that InSAR signatures can be extracted and transformed into reliable classification features for interpreting land cover types. The prototype was tested in mountainous areas covered with a dense vegetation canopy. It was demonstrated that InSAR time series signature analyses reliably identified land cover types and also traced temporal land cover change. Given the robustness of the developed scheme against temporal noise components and the availability of SAR data with advanced spatial and temporal resolution, classification of finer land cover types and identification of stable scatterers for InSAR time series techniques can be expected. Combined with the scheme in this study, the advanced spatial and temporal resolution of future SAR assets can be applied to various important applications, including monitoring global land cover change. Full article
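The linear-compression idea (stacking per-pixel time series of coherence and projecting onto principal components to obtain classification features) can be sketched as follows; the two synthetic cover classes and their coherence levels are invented for illustration and are not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stack: 500 pixels x 12 interferometric pairs of phase coherence.
# Two synthetic cover types with distinct temporal coherence signatures.
forest = 0.3 + 0.1 * rng.standard_normal((250, 12))
urban = 0.8 + 0.1 * rng.standard_normal((250, 12))
stack = np.vstack([forest, urban])

# Linear compression: center the stack and project onto the leading
# principal components obtained from an SVD.
mean = stack.mean(axis=0)
_, _, Vt = np.linalg.svd(stack - mean, full_matrices=False)
features = (stack - mean) @ Vt[:3].T   # 3 PCA features per pixel

# The first component alone separates the two synthetic cover types.
f_forest = features[:250, 0].mean()
f_urban = features[250:, 0].mean()
```

A nonlinear variant (the study's k-PCA) would replace the SVD projection with a kernelized one before feeding the features to a classifier such as an SVM.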
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1
<p>Employed InSAR time sequence pairs and the packet compositions in Mt. Baekdu (<b>a</b>) and Uljin (<b>b</b>).</p>
Full article ">Figure 2
<p>The land cover classifications with PCA/ML over Mt. Baekdu from (<b>a</b>) HH1 packet during 2009/07–2010/03; (<b>b</b>) HH2 packet during 2010/03–2010/10; (<b>c</b>) HH3 packet during 2010/10–2011/03; and (<b>d</b>) HH-HV packet during 2010/06–2010/10.</p>
Full article ">Figure 3
<p>The land cover classifications with k-PCA plus SVM scheme over Mt. Baekdu from (<b>a</b>) HH1 packet during 2009/07–2010/03; (<b>b</b>) HH2 packet during 2010/03–2010/10; (<b>c</b>) HH3 packet during 2010/10–2011/03; (<b>d</b>) HH-HV packet during 2010/06–2010/10, and (<b>e</b>–<b>f</b>) LANDSAT sub-images and corresponding HH1 packet classifications.</p>
Full article ">Figure 4
<p>(<b>a</b>) The temporal phase coherence migrations of land cover types in the HH2 packet; (<b>b</b>) The temporal patterns of the transformed k-PCA components; (<b>c</b>) Wind components of pairs in the HH2 packet, which were extracted from the ECMWF model.</p>
Full article ">Figure 5
<p>Land cover separabilities by Jeffreys-Matusita distances, measured over simple phase coherence time series, PCA components (1 to 3) and k-PCA of the HH1, HH2, HH3 and HH_HV packets.</p>
Full article ">Figure 6
<p>NDVI (<b>a</b>) and EVI (<b>b</b>) map over Mt. Baekdu target area from MODIS land products in HH1 period (<b>left</b>), HH2 (<b>middle</b>) and HH3 (<b>right</b>).</p>
Full article ">Figure 7
<p>MODIS vegetation indexes and altitudes within classified land cover types by centered PCA (<b>a</b>) and k-PCA (<b>b</b>) schemes over Mt. Baekdu with altitude, NDVI and EVI.</p>
Full article ">Figure 8
<p>The detailed land cover classifications over Mt. Baekdu from EO-1 ALi optical image SVM processing (<b>a</b>) and HH1 packet with k-PCA plus SVM analysis (<b>b</b>).</p>
Full article ">Figure 9
<p>Landsat 8 visual channel image (<b>a</b>) and the land cover classification map (<b>b</b>) over Uljin target area. (<b>c</b>) and (<b>d</b>) respectively illustrate the land cover classifications conducted using k-PCA plus SVM scheme based on HH1 packet during 2009/07–2010/03 and HH2 packet during 2010/10–2011/03.</p>
Full article ">
17 pages, 19875 KiB  
Article
A Dark Target Detection Method Based on the Adjacency Effect: A Case Study on Crack Detection
by Li Yu, Yugang Tian and Wei Wu
Sensors 2019, 19(12), 2829; https://doi.org/10.3390/s19122829 - 25 Jun 2019
Cited by 7 | Viewed by 3893
Abstract
Dark target detection is important for engineering applications but the existing methods do not consider the imaging environment of dark targets, such as the adjacency effect. The adjacency effect will affect the quantitative applications of remote sensing, especially for high contrast images and [...] Read more.
Dark target detection is important for engineering applications, but existing methods do not consider the imaging environment of dark targets, such as the adjacency effect. The adjacency effect affects the quantitative applications of remote sensing, especially for high-contrast images and images with ever-increasing resolution. Furthermore, most studies have focused on how to eliminate the adjacency effect, and there is almost no research on applying it. However, the adjacency effect gives a dark target surrounded by a bright background some unique characteristics. This paper utilizes these characteristics to assist in detecting dark targets, and designs a low-high threshold detection strategy together with an adaptive threshold selection method under a Gaussian-distribution assumption. Meanwhile, preliminary case experiments are carried out on crack detection for concrete slope protection. Finally, the experimental results show that it is feasible to utilize the adjacency effect for dark target detection. Full article
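A minimal sketch of a low-high threshold strategy with adaptive, Gaussian-derived thresholds is given below; the hysteresis-style confirmation rule, the threshold multipliers, and the synthetic image are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy import ndimage

def detect_dark(image, k_strict=3.0, k_loose=1.5):
    """Hysteresis-style sketch of a low-high threshold strategy for dark
    targets on a bright background. Thresholds are set adaptively from a
    Gaussian fit (mean, std) to the image intensities."""
    mu, sigma = image.mean(), image.std()
    t_low, t_high = mu - k_strict * sigma, mu - k_loose * sigma
    strict = image < t_low    # surely dark: below the low threshold
    loose = image < t_high    # possibly dark: below the high threshold
    # Keep a loosely-detected component only if it contains a strict seed,
    # which suppresses isolated, mildly dark noise pixels.
    labels, _ = ndimage.label(loose)
    seeds = np.unique(labels[strict])
    return np.isin(labels, seeds[seeds > 0])

# Bright concrete-like slab with one dark crack and a mildly dark noise pixel.
rng = np.random.default_rng(1)
img = 200.0 + 5.0 * rng.standard_normal((64, 64))
img[30:34, 10:54] = 100.0   # dark crack
img[5, 5] = 185.0           # noise pixel: darker than background, not a target
mask = detect_dark(img)
```

The crack survives both thresholds, while the isolated noise pixel is rejected because it never produces a strict seed.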
(This article belongs to the Special Issue Advance and Applications of RGB Sensors)
Show Figures

Figure 1
<p>Two typical high-contrast images. (<b>a</b>–<b>d</b>) are four example parts of the text image and the crack image.</p>
Full article ">Figure 2
<p>Intensity (Lightness) profiles. (<b>a</b>) is the intensity profile along the line in <a href="#sensors-19-02829-f001" class="html-fig">Figure 1</a>a. (<b>b</b>) is the intensity profile along the line in <a href="#sensors-19-02829-f001" class="html-fig">Figure 1</a>b. (<b>c</b>) is the intensity profile along the line in <a href="#sensors-19-02829-f001" class="html-fig">Figure 1</a>c. (<b>d</b>) is the intensity profile along the line in <a href="#sensors-19-02829-f001" class="html-fig">Figure 1</a>d.</p>
Full article ">Figure 3
<p>High-contrast scenarios in GF-2 and Worldview-2 images. (<b>a</b>–<b>d</b>) are four example parts of the GF-2 image and the Worldview-2 image.</p>
Full article ">Figure 4
<p>Intensity (Lightness) profiles (only the red band is displayed). (<b>a</b>) is the intensity profile along the line in <a href="#sensors-19-02829-f003" class="html-fig">Figure 3</a>a. (<b>b</b>) is the intensity profile along the line in <a href="#sensors-19-02829-f003" class="html-fig">Figure 3</a>b. (<b>c</b>) is the intensity profile along the line in <a href="#sensors-19-02829-f003" class="html-fig">Figure 3</a>c. (<b>d</b>) is the intensity profile along the line in <a href="#sensors-19-02829-f003" class="html-fig">Figure 3</a>d.</p>
Full article ">Figure 5
<p>Concept map of the correspondence between the dark target’s location and its intensity value.</p>
Full article ">Figure 6
<p>Concept map of the low-high threshold detection strategy. (<b>a</b>) is the conceptual map consisting of noise, a dark target and a bright background. (<b>b</b>) is the conceptual map of the detection result using a low threshold. (<b>c</b>) is the conceptual map of the detection result using a high threshold. (<b>d</b>) is the conceptual map of the rough detection result using the low-high threshold detection strategy.</p>
Full article ">Figure 7
<p>The definition of separate unit.</p>
Full article ">Figure 8
<p>The curve shapes of the Gaussian probability density function and its first derivative.</p>
Full article ">Figure 9
<p>Regular expansion joints on the surface of concrete slope protection.</p>
Full article ">Figure 10
<p>Overall concrete slope protection project.</p>
Full article ">Figure 11
<p>Detection accuracy for expansion joints in all images. (<b>a</b>) Precision curves for the proposed method, the canny_morphology method, and the SWT algorithm. (<b>b</b>) Recall curves for the same three methods. (<b>c</b>) F-measure curves for the same three methods.</p>
Full article ">Figure 12
<p>(<b>a</b>–<b>d</b>) are four typical partial examples of expansion joint detection. Columns from left to right are the original images, manually drawn sketches, rough detection results using the proposed method, accurate detection results using the proposed method, detection results using the canny-morphology method, rough detection results using the SWT algorithm and accurate detection results using the SWT algorithm.</p>
Full article ">
15 pages, 952 KiB  
Article
An Evaluation Method of Safe Driving for Senior Adults Using ECG Signals
by Dong-Woo Koh and Sang-Goog Lee
Sensors 2019, 19(12), 2828; https://doi.org/10.3390/s19122828 - 25 Jun 2019
Cited by 11 | Viewed by 4012
Abstract
The elderly are more susceptible to stress than younger people. In particular, heart palpitations are one of the causes of heart failure, which can lead to serious accidents. To prevent heart palpitations, we have devised the Safe Driving Intensity (SDI) and Cardiac Reaction [...] Read more.
The elderly are more susceptible to stress than younger people. In particular, heart palpitations are one cause of heart failure, which can lead to serious accidents. To prevent heart palpitations, we have devised the Safe Driving Intensity (SDI) and Cardiac Reaction Time (CRT) as new measures of the correlation between effects on the driver's heart and the movement of a vehicle. In SDI measurement, the recommended vehicle acceleration for safe driving is inferred from the suggested correlation algorithm using machine learning. A higher SDI value than other drivers indicates less pressure on the heart. CRT estimates the onset time of heart palpitations caused by stressful driving. In particular, the SDI shows that elderly subjects tend to overestimate their driving abilities in personal assessment questionnaires. Furthermore, we validated our SDI using other general statistical methods. When comparing the results using a t-test, we obtained reliable results under the equal-variance assumption. Our results can be used as a basis for evaluating elderly people's driving ability, as well as for implementing a personalized safe driving system for the elderly. Full article
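The histogram step behind the SDI (counting palpitation episodes per 0.1 m/s² acceleration bin, as in Figure 7) can be sketched as follows; the synthetic event log and the simple "first bin with repeated episodes" read-out are illustrative stand-ins, not the paper's SDI algorithm.

```python
import numpy as np

# Hypothetical event log: vehicle acceleration (m/s^2) at each moment a
# heart-palpitation episode (abnormal RR interval) was detected.
rng = np.random.default_rng(2)
palpitation_acc = rng.normal(3.4, 0.4, size=200).clip(0.0, 6.0)

# Histogram of palpitation counts per 0.1 m/s^2 acceleration bin,
# mirroring the binning described for Figure 7.
bins = np.arange(0.0, 6.0 + 0.1, 0.1)
counts, edges = np.histogram(palpitation_acc, bins=bins)

# A simple proxy read-out: the lowest acceleration bin at which
# palpitation episodes start to repeat (>= 2 events in a bin).
repeated = np.flatnonzero(counts >= 2)
sdi_proxy = edges[repeated[0]] if repeated.size else edges[-1]
```

The paper instead locates the "highest gap" in this histogram to set the SDI; the binning and counting step is the common part.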
(This article belongs to the Special Issue Sensors for Biopotential, Physiological and Biomedical Monitoring)
Show Figures

Figure 1
<p>Illustration of the SDI and CRT, which are correlated interactions between (<b>A</b>) and (<b>B</b>). It is an important measure for evaluating safe driving ability. The portable Shimmer3 ECG unit (Shimmer, Dublin, Ireland) was used to monitor the ECG signal while driving (<b>C</b>).</p>
Full article ">Figure 2
<p>Experimental stages. Data were collected over the course of 11 minutes divided into three stages. During the main stage of the experiment (stressful driving; 500 seconds), the subjects were instructed to drive a vehicle on a difficult course with sharp turns. During the initial (before-driving) and final (after-driving) stages of the experiment, the subjects were instructed to relax.</p>
Full article ">Figure 3
<p>Original sample ECG signals that exhibit large differences between young and elderly subjects. The QRS amplitude of elderly people tends to be weaker than that of young people.</p>
Full article ">Figure 4
<p>#n points of maximum acceleration within the MTW (30 s, Maximum Time Window). These moments were considered the most stressful driving moments, which can cause heart palpitations.</p>
Full article ">Figure 5
<p>Main algorithm of SDI/CRT.</p>
Full article ">Figure 6
<p>Procedure for obtaining the SDI and CRT. As shown here, the SDI originates from the acceleration value, and the CRT is the average distance between the maximum acceleration value within the MTW and each clustered heart-palpitation RR-interval value.</p>
Full article ">Figure 7
<p>Histogram of the heart palpitation summation at every acceleration value per 0.1 m/s<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math>. As shown here, the safe driving intensity value is 3.1 according to the highest gap obtained using this method.</p>
Full article ">
18 pages, 6685 KiB  
Article
Acoustic Impulsive Noise Based on Non-Gaussian Models: An Experimental Evaluation
by Danilo Pena, Carlos Lima, Matheus Dória, Luan Pena, Allan Martins and Vicente Sousa
Sensors 2019, 19(12), 2827; https://doi.org/10.3390/s19122827 - 25 Jun 2019
Cited by 13 | Viewed by 4253
Abstract
In general, acoustic channels are neither Gaussian distributed nor second-order stationary. Applying signal processing methods designed under Gaussian assumptions to them is inadequate and consequently yields poor performance. This paper presents an analysis of audio signals corrupted by impulsive [...] Read more.
In general, acoustic channels are neither Gaussian distributed nor second-order stationary. Applying signal processing methods designed under Gaussian assumptions to them is inadequate and consequently yields poor performance. This paper presents an analysis of audio signals corrupted by impulsive noise using non-Gaussian models. Audio samples are compared to the Gaussian, α-stable and Gaussian mixture models, evaluating the fit by graphical and numerical methods. We discuss fitting properties such as the window length and the overlap, finally concluding that the α-stable model has the best fit in all tested scenarios. Full article
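The model comparison can be sketched with SciPy: draw an impulsive (α-stable) noise window, fit a Gaussian by moments, and score both candidates numerically. The α = 1.5 sample and the use of a Kolmogorov-Smirnov statistic as the numerical criterion are illustrative assumptions, not the paper's measured data or exact metrics.

```python
import numpy as np
from scipy import stats

# Synthetic impulsive noise: symmetric alpha-stable samples (alpha = 1.5,
# beta = 0) stand in for one windowed audio recording.
rng = np.random.default_rng(3)
data = stats.levy_stable.rvs(1.5, 0.0, size=500, random_state=rng)

# Gaussian fit by moments, then score both candidate models with a
# Kolmogorov-Smirnov statistic (smaller = better fit).
mu, sigma = data.mean(), data.std()
ks_gauss = stats.kstest(data, 'norm', args=(mu, sigma)).statistic
ks_stable = stats.kstest(data, stats.levy_stable(1.5, 0.0).cdf).statistic
```

Because heavy tails inflate the sample standard deviation, the moment-fitted Gaussian misses both the sharp center and the tails, while the α-stable model tracks the data closely.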
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1
<p>Experimental measurement setup.</p>
Full article ">Figure 2
<p>Indoor Scenario: Auditorium acoustically isolated without external audio noise.</p>
Full article ">Figure 3
<p>Signal behavior in the indoor scenario without audio source.</p>
Full article ">Figure 4
<p>Signal behavior in the indoor scenario with a person moving and speaking.</p>
Full article ">Figure 5
<p>Hall Scenario: A mixed indoor/outdoor environment at auditorium hall.</p>
Full article ">Figure 6
<p>Signal behavior in the hall scenario without audio source.</p>
Full article ">Figure 7
<p>Signal behavior in the hall scenario with a person moving and speaking.</p>
Full article ">Figure 8
<p>Outdoor scenario: Outside the auditorium with noise coming from the outside environment.</p>
Full article ">Figure 9
<p>Signal behavior in the outdoor scenario without audio source.</p>
Full article ">Figure 10
<p>Signal behavior in the outdoor scenario with a person moving and speaking.</p>
Full article ">Figure 11
<p>Power Spectrum Density of measured data in all scenarios (tone of 1 kHz of audio source).</p>
Full article ">Figure 12
<p>Illustration of Gaussian mixture model fitting with two Gaussians (data from the outdoor scenario in a time window with severe impulsiveness).</p>
Full article ">Figure 13
<p>Illustration of PDF fitting for all models (data from the outdoor scenario in a time window with severe impulsiveness).</p>
Full article ">Figure 14
<p>Indoor scenario: Visual comparison among Gaussian, GMM and S<math display="inline"><semantics> <mi>α</mi> </semantics></math>S PDF fitting.</p>
Full article ">Figure 15
<p>Hall scenario: Visual comparison among Gaussian, GMM and S<math display="inline"><semantics> <mi>α</mi> </semantics></math>S PDF fitting.</p>
Full article ">Figure 16
<p>Outdoor scenario: Comparison between the data distribution and the estimated Gaussian and S<math display="inline"><semantics> <mi>α</mi> </semantics></math>S models.</p>
Full article ">Figure 17
<p>Variance of the estimated <math display="inline"><semantics> <mi>α</mi> </semantics></math> parameter versus the window length.</p>
Full article ">Figure 18
<p>Illustration of non-overlapped estimation.</p>
Full article ">Figure 19
<p>Illustration of overlapped estimation.</p>
Full article ">Figure 20
<p>Gaussian model estimation: Sample window of 1500 with 10% of windows overlapping.</p>
Full article ">Figure 21
<p>GMM model estimation: Sample window of 1500 with 10% of windows overlapping.</p>
Full article ">Figure 22
<p>S<math display="inline"><semantics> <mi>α</mi> </semantics></math>S model estimation: Sample window of 1500 with 10% of windows overlapping.</p>
Full article ">Figure 23
<p>Autocovariance of the measured data for all scenarios (with no audio source).</p>
Full article ">
22 pages, 5391 KiB  
Article
Conservative Sensor Error Modeling Using a Modified Paired Overbound Method and its Application in Satellite-Based Augmentation Systems
by Yan Zhang, Zhibin Xiao, Pengpeng Li, Xiaomei Tang and Gang Ou
Sensors 2019, 19(12), 2826; https://doi.org/10.3390/s19122826 - 24 Jun 2019
Cited by 5 | Viewed by 3471
Abstract
Conservative sensor error modeling is of great significance in the field of safety-of-life. At present, the overbound method has been widely used in areas such as satellite-based augmentation systems (SBASs) and ground-based augmentation systems (GBASs) that provide integrity service. It can effectively solve [...] Read more.
Conservative sensor error modeling is of great significance in the field of safety-of-life applications. At present, the overbound method is widely used in areas such as satellite-based augmentation systems (SBASs) and ground-based augmentation systems (GBASs) that provide integrity services. It can effectively address the difficulties of modeling non-Gaussian, non-zero-mean errors and of estimating confidence intervals for user position error. However, the resulting models are often too conservative, which leads to a loss of availability. In order to further improve the availability of SBASs, an improved paired overbound method is proposed in this paper. Compared with the traditional method, the improved algorithm no longer requires the overbound function to conform to the properties of a probability distribution function, so that, under the premise of ensuring system integrity, the real error characteristics can be modeled and measured more accurately. The experimental results show that the modified paired overbound method improves the availability of the system with a probability of about 99%. Given that conservative error modeling is more sensitive to large deviations, this paper also analyzes the robustness of the improved algorithm in the case of abnormal data loss. The maximum deviation under a certain integrity risk is used to illustrate the effectiveness of the improved paired overbound method compared with the original one. Full article
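The paired-overbound condition itself (a left Gaussian CDF bounding the empirical error CDF from above and a right one bounding it from below) can be checked numerically as sketched here; the N(±b, σ) bounding pair and the synthetic non-zero-mean errors are illustrative, not the paper's data.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def is_paired_overbound(samples, b, sigma):
    """Check the classic paired-overbound condition on an empirical CDF:
    N(-b, sigma) must bound it from above and N(+b, sigma) from below,
    i.e. G_left(x) >= F(x) >= G_right(x) at every sample point."""
    xs = np.sort(np.asarray(samples))
    n = xs.size
    F_before = np.arange(0, n) / n       # empirical CDF just below each sample
    F_after = np.arange(1, n + 1) / n    # empirical CDF just above each sample
    left = np.array([norm_cdf(x, -b, sigma) for x in xs])
    right = np.array([norm_cdf(x, +b, sigma) for x in xs])
    return bool(np.all(left >= F_before) and np.all(right <= F_after))

# Non-zero-mean "range errors" and a candidate bounding pair.
rng = np.random.default_rng(4)
errors = rng.normal(0.2, 1.0, size=2000)
ok = is_paired_overbound(errors, b=1.0, sigma=1.0)
```

A shift of b = 1.0 comfortably brackets the 0.2-mean errors, while b = 0.0 fails the right-bound check, illustrating why a non-zero mean forces a paired (rather than single) bound.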
(This article belongs to the Special Issue Wireless Networks for Real Time Communication)
Show Figures

Figure 1
<p>The empirical CDF and the overbound function obtained from the two paired overbound methods. The “MPO” in the legend stands for the modified paired overbound method, the “PO” in the legend stands for the paired overbound method.</p>
Full article ">Figure 2
<p>The trade-off between the mean and standard deviation of the Gaussian distribution when the error samples are overbounded. The choice of a better overbounding function with higher availability may differ if the integrity risks are set differently.</p>
Full article ">Figure 3
<p>The relationship between the overbound function and the actual error after 1, 2, and 5 convolutions. As the number of convolutions increases, the differences between the overbound functions and the actual error distribution increase, and the results tend to approach the CDF of a step response.</p>
Full article ">Figure 4
<p>The variation in CDF difference with time (2017day121 UT12:00:00~24:00:00) for the two paired overbound methods, taking the result from PRN1 as an example.</p>
Full article ">Figure 5
<p>The rate of performance improvement with the MPO method compared to the PO method for all the three directions of the two paired overbound methods. The date is from 2017day117 to 2017day123. The results from PRN1 are taken as an example.</p>
Full article ">Figure 6
<p>The <span class="html-italic">EVPL</span> (99%) results of users in the ECAC (European Civil Aviation Conference) area in the case of 2017day121: (<b>a</b>) the <span class="html-italic">EVPL</span> (99%) results of the MPO method; (<b>b</b>) the <span class="html-italic">EVPL</span> (99%) results of the PO method.</p>
Full article ">Figure 7
<p>The relative EVPL (REVPL, 99%) results for the date of 2017day121. It shows that in about 99% of the region, using the modified paired overbound method can achieve better EVPL results than the traditional one.</p>
Full article ">Figure 8
<p>The daily average EVPL (99%) results for all users in the service area.</p>
Full article ">Figure 9
<p>The probability that the MPO tolerable maximum deviation <span class="html-italic">X<sub>max_mpo</sub></span> is greater than the PO <span class="html-italic">X<sub>max_po</sub></span>. It shows that the MPO method has stronger robustness to “large deviations”, with a probability of more than 70%.</p>
Full article ">Figure A1
<p>(<b>a</b>) Tail overbound. (<b>b</b>) CDF overbound. (<b>c</b>) Paired overbound. (<b>d</b>) EMC paired overbound.</p>
Figure A1 Cont.">
Full article ">Figure A2
<p>The procedure of parameter estimation. These parts are included: overbounding the error in a single direction, finding the worst user location in the range domain, and hypothesis testing of the results.</p>
Full article ">
17 pages, 24052 KiB  
Article
Microwave Staring Correlated Imaging Based on Unsteady Aerostat Platform
by Zheng Jiang, Yuanyue Guo, Jie Deng, Weidong Chen and Dongjin Wang
Sensors 2019, 19(12), 2825; https://doi.org/10.3390/s19122825 - 24 Jun 2019
Cited by 4 | Viewed by 3543
Abstract
Microwave staring correlated imaging (MSCI), with the technical capability of high-resolution imaging on relatively stationary targets, is a promising approach for remote sensing. For the purpose of continuous observation of a fixed key area, a tethered floating aerostat is often used as the [...] Read more.
Microwave staring correlated imaging (MSCI), with the technical capability of high-resolution imaging of relatively stationary targets, is a promising approach for remote sensing. For continuous observation of a fixed key area, a tethered floating aerostat is often used as the carrying platform for an MSCI radar system; however, the platform's non-cooperative random motion, caused by wind and imbalance, results in blurred imaging or even imaging failure. This paper presents a method that accounts for the instabilities of the platform, combining an adaptive variable suspension (AVS), which automatically keeps the antenna beam oriented toward the target area, with a position and orientation system (POS), which dynamically measures the position and attitude of the stochastic radiation radar array. By analyzing the motion features of the aerostat platform, a motion model of the radar array is established, from which the real-time position vector and attitude angles of each antenna can be represented; meanwhile, a beam-coverage selection matrix is introduced to indicate the dynamic illumination of the radar antenna beam over the overall imaging area. Because the POS data are discrete and low-rate, a curve-fitting algorithm is used to estimate the accurate position vector and attitude of each antenna at each high-rate sampling time during the imaging period. Finally, the MSCI model based on the unsteady aerostat platform is set up. In the simulations, the proposed scheme is validated: under the influence of different unstable platform movements, it achieves better imaging performance than the conventional MSCI method. Full article
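The curve-fitting step that bridges the low-rate POS reports to the high-rate radar sampling times can be sketched with a cubic-spline fit; the 100 Hz POS rate, 2 kHz pulse rate, and sinusoidal sway are assumed values for illustration, not the paper's parameters.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical timing: the POS reports one antenna coordinate at 100 Hz,
# while the radar samples at 2 kHz; a smooth curve fit bridges the gap.
t_pos = np.arange(0.0, 1.0, 1.0 / 100.0)
x_pos = 0.05 * np.sin(2 * np.pi * 1.5 * t_pos)   # slow sway of the aerostat (m)

fit = CubicSpline(t_pos, x_pos)
t_pulse = np.arange(0.0, t_pos[-1], 1.0 / 2000.0)
x_pulse = fit(t_pulse)   # estimated antenna position at each pulse time

# Here the true motion is known, so the fit error can be measured directly.
err = np.max(np.abs(x_pulse - 0.05 * np.sin(2 * np.pi * 1.5 * t_pulse)))
```

For slowly varying platform sway, the interpolation error at the pulse times is orders of magnitude below the sway amplitude, which is what makes the low-rate POS usable for per-pulse motion compensation.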
(This article belongs to the Special Issue Recent Advancements in Radar Imaging and Sensing Technology)
Show Figures

Figure 1
<p>Imaging geometry of MSCI based on unsteady aerostat platform.</p>
Full article ">Figure 2
<p>Graphical diagram for the altitude angles.</p>
Full article ">Figure 3
<p>Pulse and POS data timing diagram.</p>
Full article ">Figure 4
<p>Target image.</p>
Full article ">Figure 5
<p>The motion trajectory of each component. (<b>a</b>) the translational component along the <math display="inline"><semantics> <msub> <mi>X</mi> <mi>t</mi> </msub> </semantics></math>; (<b>b</b>) the translational component along the <math display="inline"><semantics> <msub> <mi>Y</mi> <mi>t</mi> </msub> </semantics></math>; (<b>c</b>) the translational component along the <math display="inline"><semantics> <msub> <mi>Z</mi> <mi>t</mi> </msub> </semantics></math>; (<b>d</b>) the rotation of yaw; (<b>e</b>) the rotation of pitch; (<b>f</b>) the rotation of roll.</p>
Full article ">Figure 6
<p>The imaging results of UPIM, DPDIM, and SPIM. (<b>a</b>) the reconstructed image by UPIM, the NMSE is 0.28; (<b>b</b>) the reconstructed image by DPDIM, the NMSE is 0.94; (<b>c</b>) the reconstructed image by SPIM, the NMSE is 1.21.</p>
Full article ">Figure 7
<p>The point spread function of UPIM, DPIM, and SPIM. (<b>a</b>) UPIM; (<b>b</b>) DPIM; (<b>c</b>) SPIM.</p>
Full article ">Figure 8
<p>The profile of the point spread function of UPIM, DPIM, and SPIM. (<b>a</b>) the <span class="html-italic">X</span>-axis profile of the point spread function; (<b>b</b>) the <span class="html-italic">Y</span>-axis profile of the point spread function.</p>
Full article ">Figure 9
<p>The estimated and the real trajectory of the translation along the <math display="inline"><semantics> <msub> <mi>X</mi> <mi>t</mi> </msub> </semantics></math>. (<b>a</b>) UPIM; (<b>b</b>) DPDIM; (<b>c</b>) SPIM.</p>
Full article ">Figure 10
<p>NMSE of the imaging results by UPIM, DPDIM, and SPIM at different amplitudes of translation.</p>
Full article ">Figure 11
<p>NMSE of the imaging results by UPIM, DPDIM, and SPIM at different amplitudes of rotation.</p>
Full article ">Figure 12
<p>The imaging results of UPIM when only one translation component exists. (<b>a</b>) Only translational component along the <math display="inline"><semantics> <msub> <mi>X</mi> <mi>t</mi> </msub> </semantics></math> exists, the NMSE is 0.21; (<b>b</b>) Only translational component along the <math display="inline"><semantics> <msub> <mi>Y</mi> <mi>t</mi> </msub> </semantics></math> exists, the NMSE is 0.58; (<b>c</b>) Only translational component along the <math display="inline"><semantics> <msub> <mi>Z</mi> <mi>t</mi> </msub> </semantics></math> exists, the NMSE is 0.59.</p>
Full article ">Figure 13
<p>NMSE of the imaging results when only one translational component exists at different amplitudes.</p>
Full article ">Figure 14
<p>The estimation error of antenna position in three-dimensional translations. (<b>a</b>) Only along the <math display="inline"><semantics> <msub> <mi>X</mi> <mi>t</mi> </msub> </semantics></math>; (<b>b</b>) Only along the <math display="inline"><semantics> <msub> <mi>Y</mi> <mi>t</mi> </msub> </semantics></math>; (<b>c</b>) Only along the <math display="inline"><semantics> <msub> <mi>Z</mi> <mi>t</mi> </msub> </semantics></math>.</p>
Full article ">Figure 15
<p>The imaging results of UPIM when only one rotation component exists. (<b>a</b>) Only yaw angle, the NMSE is 0.27; (<b>b</b>) Only pitch angle, the NMSE is 0.28; (<b>c</b>) Only roll angle, the NMSE is 0.28.</p>
Full article ">Figure 16
<p>NMSE of the imaging results when only one rotation component exists at different amplitudes.</p>
Full article ">Figure 17
<p>The NMSE of imaging result under different measuring accuracy. (<b>a</b>) position-measuring accuracy; (<b>b</b>) angular-measuring accuracy.</p>
Full article ">
13 pages, 2602 KiB  
Article
Facile Non-Enzymatic Electrochemical Sensing for Glucose Based on Cu2O–BSA Nanoparticles Modified GCE
by Zhikuang Dai, Ailing Yang, Xichang Bao and Renqiang Yang
Sensors 2019, 19(12), 2824; https://doi.org/10.3390/s19122824 - 24 Jun 2019
Cited by 31 | Viewed by 5540
Abstract
Transition-metal nanomaterials are very important to non-enzymatic glucose sensing because of their excellent electrocatalytic ability, good selectivity, resistance to interference from chloride ions (Cl−), and low cost. However, the linear detection range needs to be [...] Read more.
Transition-metal nanomaterials are very important to non-enzymatic glucose sensing because of their excellent electrocatalytic ability, good selectivity, resistance to interference from chloride ions (Cl−), and low cost. However, the linear detection range needs to be expanded. In this paper, Cu2O–bovine serum albumin (BSA) core-shell nanoparticles (NPs) were synthesized for the first time in air at room temperature by a facile and green route. The structure and morphology of the Cu2O–BSA NPs were characterized. The as-prepared Cu2O–BSA NPs were used to modify a glassy carbon electrode (GCE) in a Nafion matrix. Using cyclic voltammetry (CV), the influence of scanning speed, NaOH concentration, and Cu2O–BSA NP loading on the modified electrodes was probed. Cu2O–BSA NPs showed direct electrocatalytic activity for the oxidation of glucose in 50 mM NaOH solution at 0.6 V. The chronoamperometry results showed that the constructed sensor detects glucose with a detection limit of 0.4 μM, a linear detection range up to 10 mM, a high sensitivity of 1144.81 μAmM−1cm−2, and reliable anti-interference against Cl−, uric acid (UA), ascorbic acid (AA), and acetaminophen (AP). Cu2O–BSA NPs are promising nanostructures for the fabrication of non-enzymatic glucose electrochemical sensing devices. Full article
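The reported sensitivity is the slope of the calibration line relating current density to glucose concentration; the sketch below fits such a line by least squares, with invented calibration points whose slope is chosen to match the reported 1144.81 μA mM−1 cm−2.

```python
import numpy as np

# Hypothetical calibration points: glucose concentration (mM) versus
# measured current density (uA/cm^2), mimicking a linear amperometric
# response with small measurement noise.
conc = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
noise = np.array([3.0, -2.0, 1.5, -1.0, 2.0, -0.5, 0.8])
current = 1144.81 * conc + 12.0 + noise

# Sensitivity = slope of the least-squares calibration line
# (uA per mM per cm^2 for an electrode area of 1 cm^2).
slope, intercept = np.polyfit(conc, current, 1)
```

The detection limit, by contrast, comes from the low-concentration noise floor (commonly a signal-to-noise ratio of 3), not from this slope.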
(This article belongs to the Special Issue Electrochemical Nanobiosensors)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>TEM images (<b>A</b>–<b>C</b>) and XRD pattern (<b>D</b>) of Cu<sub>2</sub>O–BSA NPs. The red lines in D are the standard peaks (PDF No. 65-3288) of Cu<sub>2</sub>O.</p>
Full article ">Figure 2
<p>(<b>A</b>) Redox peaks of the present Nafion/Cu<sub>2</sub>O–BSA/GCE in the 50 mM NaOH solution with 0 (a) and 2 mM (b) of glucose at the scan rate of 5 mV/s, (<b>B</b>) Nafion/Cu<sub>2</sub>O–BSA/GCE and Nafion/GCE (inset) in the absence (a and c) and presence (b and d) of 5.0 mM glucose at the same pH with scanning speed of 50 mV/s.</p>
Full article ">Figure 3
<p>(<b>A</b>) CVs of the Nafion/Cu<sub>2</sub>O–BSA/GCE in 50 mM NaOH solution with 2 mM of glucose at different scan rates, (<b>B</b>) Linear dependence of oxidation and reduction peak current densities in (A) with the scan rates, (<b>C</b>) CVs at different concentrations of NaOH, (<b>D</b>) Current density changing with the loads of Cu<sub>2</sub>O–BSA NPs modified on the GCE. 2 mM of glucose and 50 mV/s scan rate in (C) and (D).</p>
Full article ">Figure 4
<p>Chroamperometric response (<b>A</b>), calibration curve (<b>B</b>) and detection limit (<b>C</b>) of Nafion/Cu<sub>2</sub>O–BSA/GCE for successive addition of various concentrations of glucose to 50 mM NaOH solution at 0.6 V.</p>
Full article ">Figure 5
<p>(<b>A</b>) Chroamperometric response of Nafion/Cu<sub>2</sub>O–BSA/GCE to glucose in 0 M (a) and 0.1 M (b) chloride ions; (<b>B</b>) Interference test of Nafion/Cu<sub>2</sub>O–BSA/GCE to 0.3 mM UA, 0.1 mM AA and 0.1 mM AP in 50 m NaOH at 0.6 V.</p>
Full article ">Figure 6
<p>Amperometric response of Nafion/Cu<sub>2</sub>O–BSA/GCE to glucose over time.</p>
Full article ">
18 pages, 3766 KiB  
Article
Trajectory Optimization in a Cooperative Aerial Reconnaissance Model
by Petr Stodola, Jan Drozd, Jan Nohel, Jan Hodický and Dalibor Procházka
Sensors 2019, 19(12), 2823; https://doi.org/10.3390/s19122823 - 24 Jun 2019
Cited by 13 | Viewed by 3559
Abstract
In recent years, the use of modern technology in military operations has become standard practice. Unmanned systems play an important role in operations such as reconnaissance and surveillance. This article examines a model for planning aerial reconnaissance using a fleet of mutually cooperating unmanned aerial vehicles to increase the effectiveness of the task. The model deploys a number of waypoints such that, when every waypoint is visited by any vehicle in the fleet, the area of interest is fully explored. The deployment of waypoints must meet the conditions arising from the technical parameters of the sensory systems used and tactical requirements of the task at hand. This paper proposes an improvement of the model by optimizing the number and position of waypoints deployed in the area of interest, the effect of which is to improve the trajectories of individual unmanned systems, and thus increase the efficiency of the operation. To achieve this optimization, a modified simulated annealing algorithm is proposed. The improvement of the model is verified by several experiments. Two sets of benchmark problems were designed: (a) benchmark problems for verifying the proposed algorithm for optimizing waypoints, and (b) benchmark problems based on typical reconnaissance scenarios in the real environment to prove the increased effectiveness of the reconnaissance operation. Moreover, an experiment in the SteelBeast simulation system was also conducted. Full article
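The paper's modified simulated annealing is not detailed in this summary; as a generic illustration of the Metropolis acceptance scheme applied to a related sub-problem (shortening a reconnaissance route over fixed waypoints via 2-opt reversals), a sketch might look like:

```python
import math
import random

def tour_length(order, pts):
    """Length of the path visiting pts in the given order."""
    return sum(math.dist(pts[a], pts[b]) for a, b in zip(order, order[1:]))

def anneal_visit_order(pts, t0=5.0, cooling=0.995, iters=4000, seed=1):
    """Simulated annealing over the visiting order of fixed waypoints."""
    rng = random.Random(seed)
    order = list(range(len(pts)))
    cur = tour_length(order, pts)
    best_order, best = list(order), cur
    t = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(len(pts)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # 2-opt reversal
        c = tour_length(cand, pts)
        # Metropolis rule: always accept improvements, occasionally accept
        # worse candidates with probability exp(-delta / T).
        if c < cur or rng.random() < math.exp((cur - c) / max(t, 1e-9)):
            order, cur = cand, c
            if cur < best:
                best_order, best = list(order), cur
        t *= cooling
    return best_order, best

pts = [(0, 0), (5, 5), (0, 5), (5, 0), (2, 3), (4, 1)]
order, length = anneal_visit_order(pts)
print(order, round(length, 2))
```

In the actual model the moves would also reposition waypoints subject to the sensor-coverage and tactical constraints described above; the unconstrained route ordering here only demonstrates the acceptance mechanism.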
Figure 1: Example situation: (a) area of interest covered by waypoints; (b) trajectories for individual unmanned aerial systems (UASs).
Figure 2: Improvement in the trajectories.
Figure 3: Parameters for deployment of waypoints.
Figure 4: Original waypoint deployment algorithm.
Figure 5: Algorithm for optimization of the number of waypoints.
Figure 6: Principle of calculating the longest distance.
Figure 7: Simulated annealing for the problem of optimizing waypoint positions.
Figure 8: Example of a benchmark problem (v03).
Figure 9: Benchmark problem r04: (a) original model; (b) new model; (c) Tactical Decision Support System (TDSS).
Figure 10: Comparison of the new model with the lawnmower pattern for benchmark problem r04: (a) new model; (b) lawnmower pattern.
Figure 11: Experiment for simulation (r01): (a) original model; (b) new model.
15 pages, 20848 KiB  
Article
Characterization of Nile Red as a Tracer for Laser-Induced Fluorescence Spectroscopy of Gasoline and Kerosene and Their Mixture with Biofuels
by Matthias Koegl, Christopher Mull, Kevin Baderschneider, Jan Wislicenus, Stefan Will and Lars Zigan
Sensors 2019, 19(12), 2822; https://doi.org/10.3390/s19122822 - 24 Jun 2019
Cited by 18 | Viewed by 5121
Abstract
Suitable fluorescence tracers (“dyes”) are needed for the planar measurement of droplet sizes using a combination of laser-induced fluorescence (LIF) and Mie scattering. Currently, no suitable tracers have been characterized for planar droplet sizing in gasoline and kerosene fuels, or in biofuel blends. One promising tracer is Nile red, a fluorophore. For its use in droplet size measurements, preliminary characterization of the fluorescence of the respective fuel-tracer mixtures is mandatory. For this purpose, the fluorescence and absorption behavior of Nile red dissolved in the surrogate fuels Toliso and Jet A-1, as well as in biofuel blends, was investigated. The fluorescence signal of Nile red dissolved in the two base fuels Toliso and Jet A-1 behaved linearly as a function of dye concentration. The temperature effect on the spectral absorption and emission of Nile red was investigated in a specially designed test cell. An ethanol admixture to Toliso led to a spectral shift towards longer wavelengths. The absorption and emission bands shifted towards shorter wavelengths with increasing temperature for all fuels. Both absorption and fluorescence decreased with increasing temperature for all fuels, except for E20, which showed an increased fluorescence signal with increasing temperature. Jet A-1 and its blends with hydroprocessed esters and fatty acids (HEFA) and farnesane did not exhibit explicit variations in spectral absorption or emission, but these blends showed a more distinct temperature dependence than the Toliso-ethanol blends. The effect of photo-dissociation on the LIF signal of the fuel-tracer mixtures was studied; all fuel mixtures besides Toliso showed a more or less distinct decay of the fluorescence signal over time. In summary, all investigated fuel-tracer mixtures are suitable for LIF/Mie ratio droplet sizing with Nile red at moderate temperatures and low evaporation cooling rates. Full article
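The LIF/Mie ratio sizing mentioned above rests on a simple scaling argument: the LIF signal grows with droplet volume (~d³) and the Mie signal with droplet surface (~d²), so their ratio scales with diameter d. A toy sketch, with an assumed calibration constant (not a value from the paper):

```python
# Planar droplet sizing via the LIF/Mie ratio: LIF ~ d^3, Mie ~ d^2,
# so LIF/Mie ~ d. The calibration constant k_um would come from a
# reference measurement on droplets of known size; 25.0 is illustrative.
def droplet_diameter_um(lif, mie, k_um=25.0):
    """Diameter estimate (um) from LIF and Mie intensities."""
    return k_um * lif / mie

print(droplet_diameter_um(4.0, 2.0))  # prints 50.0
```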
Figure 1: Optical arrangement of the laser-induced fluorescence (LIF) setup (left); detail of the internal design of the microcell (right).
Figure 2: Emission spectra of Nile red (left: normalized to the maximum intensity at 9.38 mg/L; right: all spectra normalized to their respective maximum value; inset diagram showing the linearity of the integral LIF signal) for various dye concentrations in Toliso and Jet A-1, 293 K, 0.1 MPa.
Figure 3: Normalized absorption (left curves) and emission spectra (right) for Nile red (9.38 mg/L) in Toliso at various temperatures, 0.1 MPa. (The absorption signal is divided by a factor of two for clarity.)
Figure 4: Spectral absorption and emission spectra for Nile red (9.38 mg/L) in various fuels at various temperatures, 0.1 MPa. (The absorption signal is divided by a factor of two for clarity.)
Figure 5: Normalized integral fluorescence intensities of the dye Nile red in the investigated fuels for various temperatures, 0.1 MPa. All standard deviations are <0.5% and are smaller than the symbols. The moderate temperature domain is marked grey.
Figure 6: Normalized absorption and emission spectra for Nile red (9.375 mg/L) for various automotive (a) and aviation (b) fuel mixtures (the absorption signal is divided by a factor of two for clarity). Normalized absorption spectra (c) of the investigated fuel-dye mixtures, all normalized to the maximum absorption of E40; normalized emission spectra (d) of the investigated fuel-dye mixtures, all normalized to the maximum signal of E20; 293 K, 0.1 MPa.
Figure 7: Photo-bleaching effect of the investigated fuel-dye mixtures at 293 K, 0.1 MPa. All standard deviations are <0.5% and are smaller than the symbols.
11 pages, 4734 KiB  
Article
Trimodal Waveguide Demonstration and Its Implementation as a High Order Mode Interferometer for Sensing Application
by Jhonattan C. Ramirez, Lucas H. Gabrielli, Laura M. Lechuga and Hugo E. Hernandez-Figueroa
Sensors 2019, 19(12), 2821; https://doi.org/10.3390/s19122821 - 24 Jun 2019
Cited by 15 | Viewed by 3274
Abstract
This work implements and demonstrates an interferometric transducer based on a trimodal optical waveguide concept. The readout signal is generated from the interference between the fundamental and second-order modes propagating along a straight polymer waveguide. Intuitively, the higher the mode order, the larger the fraction of power (evanescent field) propagating outside the waveguide core, and hence the higher the sensitivity that can be achieved when interfering against the strongly confined fundamental mode. The device is fabricated in the polymer SU-8 on a SiO2 substrate and shows a free spectral range of 20.2 nm and a signal visibility of 5.7 dB, reaching a sensitivity to temperature variations of 0.0586 dB/°C. The results indicate that the proposed interferometer is a promising candidate for a highly sensitive, compact, and low-cost photonic transducer for different types of sensing applications, among them point-of-care. Full article
(This article belongs to the Section Optical Sensors)
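The reported free spectral range is consistent with the standard two-mode interferometer relation FSR = λ²/(ΔNg·L), where ΔNg is the group-index difference between the interfering modes and L the interferometer length. The values below are illustrative, chosen to reproduce a ~20 nm FSR, not figures from the paper:

```python
# Free spectral range of a two-mode (modal) interferometer:
#   FSR = lambda^2 / (dNg * L)
lam = 1550e-9   # operating wavelength (m)
dNg = 0.012     # assumed group-index difference between modes (illustrative)
L = 10e-3       # assumed interferometer length (illustrative)

fsr = lam**2 / (dNg * L)
print(f"FSR = {fsr * 1e9:.1f} nm")  # prints FSR = 20.0 nm
```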
Graphical abstract

Figure 1: Scheme of the trimodal waveguide sensor showing the expected light modes in each section.
Figure 2: (a) Cutoff analysis for a 600 nm-high waveguide. (b) Effective area as a function of width variation in the single-mode section of the single-channel modal interferometer with 600 nm height, operating at a 1550 nm wavelength. Percentage of energy coupled into the second-order mode as a function of width variation in (c) the single-mode section and (d) the multimode section, at the same wavelength.
Figure 3: Fabrication and characterization of the trimodal interferometer device. (a) SEM image of a trimodal device, highlighting the single-mode and multimode areas and the step-junction section. (b) Experimental setup for the trimodal interferometers. (c) Design of the experimental setup used to characterize the fabricated interferometric devices.
Figure 4: Sensitivity measurement in trimodal devices. (a) Measured interferometric signal resulting from the modal interaction within a trimodal interferometer with 3.5 μm width in the single-mode section and 10.5 μm width in the multimodal region. (b) FSR and visibility analysis for a trimodal interferometric device with 600 nm height and a cross-section with a single-mode section width of 3.5 μm and a multimode section width varied between 10 μm and 12 μm in 0.5 μm steps. (c) Normalized transmittance as a function of temperature variation for fringe peaks at 1485 nm, 1505 nm, and 1525 nm.
Figure 5: Fringe power measured as a function of temperature variation at the 1485 nm wavelength peak.
Figure 6: Bulk sensitivities for 10 mm and 15 mm sensing-area lengths. (a) Simulated sensitivity of the fabricated devices as a function of refractive index variation in the sensing area, for the TE propagation mode. (b,c) Simulated sensitivity of the optimized trimodal component as a function of refractive index variation in the sensing area and of the width dimensions in the trimodal region, respectively. (d) Free spectral range of the optimized trimodal component.
16 pages, 1436 KiB  
Article
Autonomic Nervous System Response during Light Physical Activity in Adolescents with Anorexia Nervosa Measured by Wearable Devices
by Lucia Billeci, Alessandro Tonacci, Elena Brunori, Rossella Raso, Sara Calderoni, Sandra Maestro and Maria Aurora Morales
Sensors 2019, 19(12), 2820; https://doi.org/10.3390/s19122820 - 24 Jun 2019
Cited by 15 | Viewed by 4948
Abstract
Anorexia nervosa (AN) is associated with a wide range of disturbances of the autonomic nervous system. The aim of the present study was to monitor heart rate (HR) and heart rate variability (HRV) during light physical activity in a group of adolescent girls with AN and in age-matched controls using a wearable, minimally obtrusive device. We enrolled a sample of 23 adolescents with AN and 17 controls. After performing a 12-lead electrocardiogram and echocardiography, we used a wearable device to record a one-lead electrocardiogram for 5 min at baseline, for 5 min during light physical exercise (task), and for 5 min during recovery. From the recordings, we extracted HR and HRV indices. Among subjects with AN, HR increased during the task and decreased at recovery, whereas among controls it did not change between the test phases. HRV features showed a different trend between the two groups, with an increased low-to-high-frequency ratio (LF/HF) in the AN group, due to increased LF and decreased HF, unlike controls, who instead slightly increased their standard deviation of NN intervals (SDNN) and root mean square of successive differences (RMSSD). The response of the AN group during the task, compared to that of the healthy adolescents, suggests a possible sympathetic activation or parasympathetic withdrawal. This result could be related to the low energy availability associated with the excessive loss of fat and lean mass in subjects with AN, which could drive autonomic imbalance even during light physical activity. Full article
(This article belongs to the Special Issue Wearable Sensors and Devices for Healthcare Applications)
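The time-domain HRV indices named above (SDNN, RMSSD) have standard definitions over the NN/RR interval series. A minimal sketch with a hypothetical 5-beat series:

```python
import math
from statistics import pstdev

def hrv_time_domain(rr_ms):
    """Time-domain HRV from a list of RR (NN) intervals in milliseconds."""
    hr = 60000.0 / (sum(rr_ms) / len(rr_ms))   # mean heart rate (bpm)
    sdnn = pstdev(rr_ms)                        # SDNN: std. dev. of NN intervals (ms)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # RMSSD (ms)
    return hr, sdnn, rmssd

# Hypothetical 5-beat RR series (ms), for illustration only.
rr = [800, 810, 790, 805, 795]
hr, sdnn, rmssd = hrv_time_domain(rr)
print(round(hr, 1), round(sdnn, 2), round(rmssd, 2))  # prints 75.0 7.07 14.36
```

The frequency-domain indices (LF, HF, LF/HF) would additionally require spectral estimation of the interpolated RR series, which is omitted here.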
Figure 1: Wearable sensor for electrocardiographic (ECG) signal acquisition based on the ECG SHIMMER™ device.
Figure 2: Original ECG signal (red) and signal obtained after preprocessing (blue) for a sample subject during the task.
Figure 3: Change in temporal-domain features during baseline, task, and recovery for the AN group and controls. (a) Heart rate (HR); (b) standard deviation of NN intervals (SDNN); (c) root mean square of successive differences (RMSSD). *: p < 0.05; continuous line: significant differences in the AN group; dashed line: significant differences in controls.
Figure 4: Change in frequency-domain features during baseline, task, and recovery for the AN group and controls. (a) Low-frequency/high-frequency ratio (LF/HF); (b) normalized low frequency (LFn); (c) normalized high frequency (HFn). *: p < 0.05; continuous line: significant differences in the AN group; dashed line: significant differences in controls.
22 pages, 12660 KiB  
Article
An Investigation on a Quantitative Tomographic SHM Technique for a Containment Liner Plate in a Nuclear Power Plant with Guided Wave Mode Selection
by Yonghee Lee and Younho Cho
Sensors 2019, 19(12), 2819; https://doi.org/10.3390/s19122819 - 24 Jun 2019
Cited by 10 | Viewed by 4189
Abstract
The containment liner plate (CLP) in a nuclear power plant is the most critical part of the plant structure, as it prevents radioactive contamination of the surrounding area. This paper presents the feasibility of structural health monitoring (SHM) and an elastic wave tomography method based on ultrasonic guided waves (GW) for evaluating the integrity of a CLP. It aims to assess integrity from the dynamic response of a damaged isotropic structure. The proposed SHM technique relies on sensors that can be placed on the structure permanently and can monitor either passively or actively. Applying this method requires suitable guided-wave mode tuning to verify wave propagation. A finite element analysis (FEA) is performed to identify the suitable GW mode for a CLP by considering its geometric and material conditions. Furthermore, the elastic wave tomography technique is modified to evaluate the CLP condition and visualize it. A modified reconstruction algorithm for the probabilistic inspection of damage (RAPID) tomography algorithm is used to quantify corrosion defects in the CLP. The location and shape of wall-thinning defects are successfully obtained using elastic GW-based SHM. By applying the verified GW mode to an omnidirectional transducer, wider use of the SHM-based evaluation technique for CLPs can be expected. Full article
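The RAPID reconstruction weights each transmitter–receiver pair's damage index over an ellipse around the pair: pixels whose distance ratio R falls below a shape factor β accumulate DI·(β−R)/(β−1). A toy sketch, with illustrative damage indices and β (the paper's actual modification of RAPID is not reproduced here):

```python
import math

def rapid_map(pairs, dis, grid, beta=1.05):
    """Probability-of-damage map in the RAPID formulation.

    pairs: list of ((tx, ty), (rx, ry)) transducer pair positions
    dis:   damage index per pair (e.g., 1 - signal correlation)
    grid:  list of (x, y) pixel positions
    """
    img = []
    for (x, y) in grid:
        p = 0.0
        for ((tx, ty), (rx, ry)), di in zip(pairs, dis):
            d_pair = math.dist((tx, ty), (rx, ry))
            # Ellipse ratio: sum of distances to both transducers,
            # normalized by the direct transducer separation.
            r = (math.dist((tx, ty), (x, y)) + math.dist((rx, ry), (x, y))) / d_pair
            if r < beta:
                p += di * (beta - r) / (beta - 1)
        img.append(p)
    return img

# Toy example: two crossing pairs; the damaged path has DI = 0.8.
pairs = [((0, 0), (10, 10)), ((0, 10), (10, 0))]
dis = [0.8, 0.05]
grid = [(5, 5), (2, 8)]
print([round(v, 3) for v in rapid_map(pairs, dis, grid)])  # prints [0.85, 0.05]
```

The crossing point (5, 5), which lies on the damaged path, accumulates a high value, while the off-path pixel keeps only the small background contribution.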
Figure 1: Dispersion curves for the CLP mock-up specimen: (a) phase velocity; (b) group velocity.
Figure 2: Basic concept of image reconstruction using the RAPID algorithm.
Figure 3: Schematic of the CLP mock-up specimen: (a) real positions of defect locations; (b) size information of defects on the CLP mock-up plate.
Figure 4: Partition information of the CLP mock-up specimen.
Figure 5: Schematic of part size and transducer locations on the specimen (indicated as red circles).
Figure 6: Experimental setup for generating GWs.
Figure 7: Magnitude of the GW signal from a transmitter–receiver pair over a 400 mm propagation distance: (a) A0 1.0 MHz·mm; (b) S0 2.0 MHz·mm; (c) A1 3.0 MHz·mm; (d) S1 3.5 MHz·mm; (e) A1 3.6 MHz·mm; (f) S0 3.0 MHz·mm.
Figure 8: Wave structure analysis of Lamb wave modes: (a) S0 3.0 MHz·mm; (b) A1 3.6 MHz·mm.
Figure 9: Hann-windowed excitation signals: (a) 500 kHz for the S0 mode; (b) 600 kHz for the A1 mode.
Figure 10: CLP mock-up modeling specification for the FEA simulation.
Figure 11: Geometric information of the mesh for generating Lamb waves: (a) 500 kHz for the S0 mode; (b) 600 kHz for the A1 mode.
Figure 12: Magnitude of the GW signal from the ABAQUS FEA simulation over a 400 mm propagation distance: (a) A1 3.6 MHz·mm; (b) S0 3.0 MHz·mm.
Figure 13: Propagation of Lamb waves from the ABAQUS FEA simulation over a 400 mm propagation distance: (a) A1 3.6 MHz·mm; (b) S0 3.0 MHz·mm.
Figure 14: Lamb wave energy variation with respect to wall-thinning depth: (a) modeling of wall-thinning defects for FEA; (b) energy variation results compared with the no-defect condition.
Figure 15: Wall-thinning defect information for experimental verification: (a) defect No. 5 with wall-thinning depth = 2 mm; (b) defect No. 9 with wall-thinning depth = 1 mm.
Figure 16: Experimental amplitude and wave mode variation of Lamb wave modes with respect to wall-thinning status: (a) A1 3.6 MHz·mm; (b) S0 3.0 MHz·mm.
Figure 17: Wave propagation network and defect locations of Part 1 for verifying suitable mode selection by image reconstruction techniques.
Figure 18: RAPID tomographic results for GW modes: (a) A1 3.6 MHz·mm; (b) S0 3.0 MHz·mm.
Figure 19: Overall tomographic images for the CLP mock-up specimen: (a) part 1; (b) part 2; (c) part 3; (d) part 4; (e) part 5; (f) part 6; (g) part 7; (h) part 8.
Figure 20: Tomography result of the CLP mock-up specimen: comparison of defect information and the tomographic image of the CLP mock-up specimen.
27 pages, 364 KiB  
Review
Exact Closed-Form Multitarget Bayes Filters
by Ronald Mahler
Sensors 2019, 19(12), 2818; https://doi.org/10.3390/s19122818 - 24 Jun 2019
Cited by 18 | Viewed by 3474
Abstract
The finite-set statistics (FISST) foundational approach to multitarget tracking and information fusion has inspired work by dozens of research groups in at least 20 nations, and FISST publications have been cited tens of thousands of times. This review paper addresses a recent and cutting-edge aspect of this research: exact closed-form—and, therefore, provably Bayes-optimal—approximations of the multitarget Bayes filter. The five such filters proposed to date—the generalized labeled multi-Bernoulli (GLMB), labeled multi-Bernoulli mixture (LMBM), and three Poisson multi-Bernoulli mixture (PMBM) filter variants—are assessed in depth. This assessment includes a theoretically rigorous, but intuitive, statistical theory of “undetected targets”, and concrete formulas for the posterior undetected-target densities under the “standard” multitarget measurement model. Full article
(This article belongs to the Section Physical Sensors)
23 pages, 1522 KiB  
Article
A Secure and Efficient Digital-Data-Sharing System for Cloud Environments
by Zhen-Yu Wu
Sensors 2019, 19(12), 2817; https://doi.org/10.3390/s19122817 - 24 Jun 2019
Cited by 3 | Viewed by 4234
Abstract
“Education Cloud” is a cloud-computing application used in educational contexts to facilitate the use of comprehensive digital technologies and establish data-based learning environments. The immense amount of digital resources, data, and teaching materials involved in these environments must be stored in robust data-access systems. These systems must be equipped with effective security mechanisms to guarantee confidentiality and ensure the integrity of the cloud-computing environment. To minimize the potential risk of privacy exposure, digital sharing service providers must encrypt their digital resources, data, and teaching materials, and digital-resource owners must have complete control over what data or materials they share. In addition, the data in these systems must be accessible to e-learners. In other words, data-access systems should not only encrypt data, but also provide access control mechanisms by which users may access the data. In cloud environments, digital sharing systems no longer target single users, and the access control by numerous users may overload a system and increase management burden and complexity. This study addressed these challenges to create a system that preserves the benefits of combining digital sharing systems and cloud computing. A cloud-based and learner-centered access control mechanism suitable for multi-user digital sharing was developed. The proposed mechanism resolves the problems concerning multi-user access requests in cloud environments and dynamic updating in digital-sharing systems, thereby reducing the complexity of security management. Full article
(This article belongs to the Special Issue Selected Papers from TIKI IEEE ICASI 2019)
Figure 1: Access diagram for the cloud-based digital sharing system.
Figure 2: Context database and query set for the proposed digital sharing system.
11 pages, 3695 KiB  
Article
An Approach to Measure Tilt Motion, Straightness and Position of Precision Linear Stage with a 3D Sinusoidal-Groove Linear Reflective Grating and Triangular Wave-Based Subdivision Method
by Hsiu-An Tsai and Yu-Lung Lo
Sensors 2019, 19(12), 2816; https://doi.org/10.3390/s19122816 - 24 Jun 2019
Cited by 3 | Viewed by 4293
Abstract
This work presents a novel and compact method for simultaneously measuring errors in linear displacement and vertical straightness of a moving linear air-bearing stage using a 3D sinusoidal-groove linear reflective grating and a novel triangular wave-based sequence signal analysis method. The new scheme is distinct from previous studies in that it considers two signals to analyze linear displacement and vertical straightness. In addition, the tilt motion of the precision linear stage can also be measured using the 3D sinusoidal-groove linear reflective grating. The proposed system is similar to a linear encoder and can make online measurements of stage errors for the analysis of automatic processes, and it can also be used for real-time monitoring. The performance of the proposed method and its reliability have been verified by experiments. The experiments show that the maximum error of the measured tilt angle, linear displacement, and vertical straightness is less than 0.058°, 0.239 μm, and 0.188 μm, respectively. The maximum repeatability error in the measurement of tilt angle, linear displacement, and vertical straightness is less than ±0.189°, ±0.093 μm, and ±0.016 μm, respectively. The proposed system is suitable for error compensation in multi-axis systems and finds application in many industries. Full article
(This article belongs to the Special Issue Selected Papers from IEEE ICKII 2019)
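In encoder-style subdivision schemes like the one described, displacement is recovered by counting edges of the triggered pulse train, each edge corresponding to one subdivision of the grating period. A heavily simplified sketch (direction sensing omitted; the period and subdivision count are illustrative, not the paper's values):

```python
def displacement_from_pulses(levels, grating_period_um, edges_per_period):
    """Incremental displacement from a digitized pulse train.

    levels: sequence of 0/1 samples of the triggered pulse signal.
    Each rising or falling edge advances the count by one subdivision
    of the grating period.
    """
    edges = sum(1 for a, b in zip(levels, levels[1:]) if a != b)
    return edges * grating_period_um / edges_per_period

# Illustrative numbers: an 8 um grating period subdivided into 16
# edge counts per period.
pulses = [0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1]
print(displacement_from_pulses(pulses, 8.0, 16))  # prints 4.5
```

A real implementation would use the quadrant photodiode's quadrature channels to resolve direction and would latch edge counts in hardware, as the paper's circuit board does.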
Figure 1: Configuration of the proposed measurement system.
Figure 2: Clock waveform: (a) positive clock pulse; (b) negative clock pulse.
Figure 3: Photograph of the circuit board of the proposed triangular wave-based pulse-triggering method, and schematic of the corresponding signal outputs in the measurement system. I0: quadrant photodiode (QPD) output signal; IS: the corresponding signal output from the QPD; IT: the triangular signal derived from IS; IP: the pulse signal derived from IS. OP is the count of pulse-wave edge triggers.
Figure 4: (a) Schematic of the experimental setup with the proposed system. (b) Photograph of the experimental system.
Figure 5: Measured profiles and photograph of the two-dimensional sinusoidal grating.
Figure 6: Measured results of the tilt angle (θZ) tests with the proposed system.
Figure 7: Measured average results and error bars of linear positioning by the developed system and an interferometer measurement system (Renishaw XL-80).
Figure 8: Measured results and error bars of vertical straightness by the developed system and an interferometer measurement system (Renishaw XL-80).
14 pages, 3524 KiB  
Article
Photoacoustic/Ultrasound/Optical Coherence Tomography Evaluation of Melanoma Lesion and Healthy Skin in a Swine Model
by Karl Kratkiewicz, Rayyan Manwar, Ali Rajabi-Estarabadi, Joseph Fakhoury, Jurgita Meiliute, Steven Daveluy, Darius Mehregan and Kamran (Mohammad) Avanaki
Sensors 2019, 19(12), 2815; https://doi.org/10.3390/s19122815 - 24 Jun 2019
Cited by 61 | Viewed by 7865
Abstract
The marked increase in the incidence of melanoma coupled with the rapid drop in the survival rate after metastasis has promoted the investigation into improved diagnostic methods for melanoma. High-frequency ultrasound (US), optical coherence tomography (OCT), and photoacoustic imaging (PAI) are three potential modalities that can assist a dermatologist by providing extra information beyond dermoscopic features. In this study, we imaged a swine model with spontaneous melanoma using these modalities and compared the images with images of nearby healthy skin. Histology images were used for validation. Full article
(This article belongs to the Special Issue Skin Sensors)
Show Figures

Figure 1
<p>Principle of photoacoustic imaging. (<b>a</b>) Schematic of photoacoustic imaging setup for the acquisition of images from swine skin. (<b>b</b>) Optical absorption spectrum for most abundant photoacoustic absorbers in the skin with dashed lines showing wavelengths used in this study. Left: 532 nm; right: 1064 nm.</p>
Figure 2
<p>Ultrasound (US)/photoacoustic (PA) system components. (<b>a</b>) Optical coherence tomography (OCT) system. (<b>b</b>) US/PA DAQ, processing, and storage units, (i) Vantage 128 DAQ system, and (ii) processing unit. (<b>c</b>) US/PA probe specifications. (<b>d</b>) US/PA probe in use on swine melanoma lesion. DAQ: Data acquisition unit, HSL: High-speed swept-source laser.</p>
Figure 3
<p>Imaged suspect lesions. (<b>a</b>) (i) Abdominal, dark-brown pigmented plaque with irregular border confirmed as melanoma (black-circle), (ii) histology of nearby healthy skin (red-circle), and (iii) histology of the suspect lesion. (<b>b</b>) (i) Flank, large dark-brown plaque confirmed as melanoma (circled), (ii) histology of nearby healthy skin, and (iii) histology of the suspect lesion.</p>
Figure 4
<p>Ultrasound images of melanoma lesion and nearby healthy skin. (<b>a</b>) Abdominal: (i) Lesion, (ii) nearby healthy. (<b>b</b>) Flank: (i) Lesion, (ii) healthy. (<b>c</b>) Bar chart of average pixel intensity from epidermal region of US images. E: Epidermis, d: Dermis, sc: Subcutaneous tissue. Fibrotic septa (arrows), epidermal layer pixels (yellow dashes).</p>
Figure 5
<p>OCT images of melanoma lesion and nearby healthy skin. (<b>a</b>) Abdominal: (i) Lesion and (ii) healthy. (<b>b</b>) Flank: (i) Lesion and (ii) healthy. Melanomas demonstrate disorganization and thickening of the epidermis, larger rete ridges, an obscured dermal–epidermal junction (DEJ), and dermal tumor nests. Yellow circles: Dermal nests of melanocytes. Red lines: Dermal–epidermal junction. Green brackets: Epidermis. Light blue brackets: Dermis.</p>
Figure 6
<p>Photoacoustic images of melanoma lesion and nearby healthy skin at 532 nm illumination wavelength. (<b>a</b>) Abdominal: (i) Lesion and (ii) healthy. (<b>b</b>) Flank: (i) Lesion and (ii) healthy. (<b>c</b>) Bar chart of average pixel intensity from epidermal region in the PA images of 532 nm. E: Epidermis. Epidermal pixels (white dashes). The PA signal was increased in the melanoma, highlighting the increase in melanin.</p>
Figure 7
<p>Photoacoustic images of melanoma lesion and nearby healthy skin at 1064 nm illumination wavelength. (<b>a</b>) Abdominal: (i) Lesion and (ii) healthy. (<b>b</b>) Flank: (i) Lesion and (ii) healthy. (<b>c</b>) Bar chart of average pixel intensity from epidermal region at 1064 nm images. E: Epidermis. Fibrotic septa (arrows); averaged epidermal pixels (white dashes).</p>
16 pages, 6228 KiB  
Article
A New Approach to Fall Detection Based on Improved Dual Parallel Channels Convolutional Neural Network
by Xiaoguang Liu, Huanliang Li, Cunguang Lou, Tie Liang, Xiuling Liu and Hongrui Wang
Sensors 2019, 19(12), 2814; https://doi.org/10.3390/s19122814 - 24 Jun 2019
Cited by 12 | Viewed by 3164
Abstract
Falls are the major cause of fatal and non-fatal injury among people aged more than 65 years. Due to the grave consequences of falls, it is necessary to conduct thorough research on them. This paper presents a method for fall detection using surface electromyography (sEMG) based on an improved dual parallel channels convolutional neural network (IDPC-CNN). The proposed IDPC-CNN model is designed to identify falls from daily activities using the spectral features of sEMG. First, the classification accuracies of time-domain features and spectrograms are compared using linear discriminant analysis (LDA), k-nearest neighbors (KNN) and support vector machines (SVM). Results show that spectrograms provide a richer way to extract pattern information and better classification performance. Therefore, the spectrogram features of sEMG are selected as the input of the IDPC-CNN to distinguish between daily activities and falls. Finally, the IDPC-CNN is compared with SVM and three CNNs of different structures under the same conditions. Experimental results show that the proposed IDPC-CNN achieves 92.55% accuracy, 95.71% sensitivity and 91.7% specificity. Overall, the IDPC-CNN outperforms the comparison methods in accuracy, efficiency, training and generalization. Full article
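Since the IDPC-CNN takes spectrogram features of sEMG as input, the preprocessing step can be sketched as a short-time FFT; the sampling rate, toy signal, and window parameters below are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

# Toy sEMG window -> log-power spectrogram feature map (illustrative only).
fs = 1000                                    # assumed sEMG sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
semg = np.sin(2 * np.pi * 80 * t) + 0.3 * rng.standard_normal(t.size)

win, hop = 128, 64                           # 128-sample Hann window, 50% overlap
hann = np.hanning(win)
frames = [semg[i:i + win] * hann for i in range(0, t.size - win + 1, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power per frame
features = np.log1p(spec.T)                  # (freq bins, time frames) CNN input
print(features.shape)
```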
(This article belongs to the Section Biosensors)
Show Figures

Figure 1
<p>The position of surface electromyography (sEMG) electrode.</p>
Figure 2
<p>The 4 gestures considered in this work.</p>
Figure 3
<p>The comparison of sEMG signals before and after denoising.</p>
Figure 4
<p>Mean short-term energy value result.</p>
Figure 5
<p>Threshold segmentation diagram.</p>
Figure 6
<p>Effective signal segment extraction.</p>
Figure 7
<p>Variance contribution rate.</p>
Figure 8
<p>Spectrogram processing example.</p>
Figure 9
<p>Feature extraction flow chart.</p>
Figure 10
<p>Sliding window schematic.</p>
Figure 11
<p>Accuracy comparison diagram.</p>
Figure 12
<p>Improved dual parallel channels convolutional neural network structure.</p>
Figure 13
<p>Dual parallel channel 1 convolutional neural network structure.</p>
Figure 14
<p>Dual parallel channel 2 convolutional neural network structure.</p>
Figure 15
<p>Single-channel convolutional neural network (CNN) structure.</p>
Figure 16
<p>Train and test flow chart.</p>
Figure 17
<p>Performance index comparison.</p>
16 pages, 812 KiB  
Article
Traffic Estimation for Large Urban Road Network with High Missing Data Ratio
by Kennedy John Offor, Lubos Vaci and Lyudmila S. Mihaylova
Sensors 2019, 19(12), 2813; https://doi.org/10.3390/s19122813 - 24 Jun 2019
Cited by 7 | Viewed by 3870
Abstract
Intelligent transportation systems require knowledge of current and forecasted traffic states for effective control of road networks. The actual traffic state has to be estimated, as the existing sensors do not capture the needed state. Sensor measurements often contain missing or incomplete data as a result of communication issues, faulty sensors or cost, leading to incomplete monitoring of the entire road network. This missing data poses challenges to traffic estimation approaches. In this work, a robust spatio-temporal traffic imputation approach capable of withstanding high missing data rates is presented. A particle-based approach with Kriging interpolation is proposed. The performance of the particle-based Kriging interpolation for different missing data ratios was investigated for a large road network comprising 1000 segments. Results indicate that the effect of missing data in a large road network can be mitigated by Kriging interpolation within the particle filter framework. Full article
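The Kriging step used for imputation can be sketched as ordinary Kriging along a one-dimensional stretch of road; the exponential variogram, its parameters, and the toy speed data below are illustrative assumptions, not the fitted model or data from the paper.

```python
import numpy as np

def exp_variogram(h, sill=25.0, rng=5.0):
    """Exponential variogram gamma(h); sill and range are assumed values."""
    return sill * (1.0 - np.exp(-h / rng))

def krige(x_obs, y_obs, x_new):
    """Ordinary Kriging estimate at x_new from observations (x_obs, y_obs)."""
    n = len(x_obs)
    # Kriging system with a Lagrange multiplier enforcing weights that sum to 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(np.abs(x_obs[:, None] - x_obs[None, :]))
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.abs(x_obs - x_new))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ y_obs)

# Speeds observed on segments 0-9 except segment 4, which is imputed.
x_obs = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 6.0, 8.0, 9.0])
y_obs = np.array([52.0, 50.0, 47.0, 45.0, 44.0, 46.0, 49.0, 51.0])
print(round(krige(x_obs, y_obs, 4.0), 1))  # estimate near the neighbouring speeds
```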
(This article belongs to the Section Intelligent Sensors)
Show Figures

Graphical abstract
Figure 1
<p>Stochastic compositional model (SCM) road network showing segments and measurement points [<a href="#B31-sensors-19-02813" class="html-bibr">31</a>].</p>
Figure 2
<p>Covariance and variogram models.</p>
Figure 3
<p>Spatio-temporal evolution of traffic flow for the 100 segments.</p>
Figure 4
<p>Spatio-temporal evolution of traffic speed for the 100 segments.</p>
Figure 5
<p>RMSE of speed at different missing data ratios.</p>
Figure 6
<p>RMSE of flow at different missing data ratios.</p>
Figure 7
<p>Estimated flow for the 100th segment with 30% missing data.</p>
Figure 8
<p>Estimated speed for the 100th segment with 30% missing data.</p>
19 pages, 12454 KiB  
Article
Understanding Collective Human Mobility Spatiotemporal Patterns on Weekdays from Taxi Origin-Destination Point Data
by Jing Yang, Yizhong Sun, Bowen Shang, Lei Wang and Jie Zhu
Sensors 2019, 19(12), 2812; https://doi.org/10.3390/s19122812 - 24 Jun 2019
Cited by 19 | Viewed by 4002
Abstract
With the availability of large geospatial datasets, the study of collective human mobility spatiotemporal patterns provides a new way to explore urban spatial environments from the perspective of residents. In this paper, we constructed a classification model for mobility patterns that is suitable for taxi OD (Origin-Destination) point data, and it comprises three parts. First, a new aggregate unit, which uses a road intersection as the constraint condition, is designed for the analysis of the taxi OD point data. Second, the time series similarity measurement is improved by adding a normalization procedure and time windows to address the particular characteristics of the taxi time series data. Finally, the DBSCAN algorithm is used to classify the time series into different mobility patterns based on a proximity index that is calculated using the improved similarity measurement. In addition, we used the random forest algorithm to establish a correlation model between the mobility patterns and the regional functional characteristics. Based on the taxi OD point data from Nanjing, we delimited seven mobility patterns and illustrated that the regional functions have obvious driving effects on these mobility patterns. These findings are applicable to urban planning, traffic management and planning, and land use analyses in the future. Full article
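The improved similarity measurement (normalization plus a time window on dynamic time warping) can be sketched as follows; the window width and toy series are illustrative, not the paper's data or exact formulation.

```python
import numpy as np

def znorm(x):
    """Normalize a time series to zero mean and unit variance."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def dtw_windowed(a, b, window=2):
    """DTW distance restricted to a Sakoe-Chiba band of the given width."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - window), min(m, i + window) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

s1 = znorm([2, 4, 9, 7, 3, 2, 8, 6])
s2 = znorm([2, 5, 9, 6, 3, 2, 9, 5])   # similar shape, slightly shifted peaks
s3 = znorm([9, 7, 2, 2, 6, 8, 3, 2])   # a different daily profile
print(dtw_windowed(s1, s2) < dtw_windowed(s1, s3))  # True: s2 is the closer series
```

Distances computed this way can then feed a proximity index for DBSCAN clustering, as the paper describes.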
(This article belongs to the Special Issue Mobile Sensing: Platforms, Technologies and Challenges)
Show Figures

Figure 1
<p>Overview of the procedure for collective human mobility spatiotemporal pattern recognition.</p>
Figure 2
<p>Illustration of the study area (Nanjing, China): (<b>a</b>) the spatial distribution of the taxi one-day tracking points, (<b>b</b>) the spatial distribution of the taxi OD points and the 10 districts in Nanjing. OD, origin-destination.</p>
Figure 3
<p>Illustration of the aggregate unit: (<b>a</b>) chart of the OD frequency in each direction of the road intersection, (<b>b</b>) chart of the distances from the OD points to the nearest road intersection, and (<b>c</b>) diagram of the aggregate unit.</p>
Figure 4
<p>Diagram of the DTW distance function considering the time window (DTW, dynamic time warping).</p>
Figure 5
<p>Time series similarity measurement: (<b>a</b>) the unnormalized correlation between the DTW distance and the OD frequency, (<b>b</b>) the differences in the absolute distances within the same series, (<b>c</b>) the normalized correlation between the DTW distance and the OD frequency, and (<b>d</b>) the correlation between the CORT coefficient and the OD frequency.</p>
Figure 6
<p>Determining the threshold value: (<b>a</b>) the correlation between the Eps value and the contour coefficient with different MinPts values and (<b>b</b>) the correlation between the Eps value and the number of clusters with different MinPts values.</p>
Figure 7
<p>Clustering results of the simulation experiments: (<b>a</b>–<b>c</b>) the clustering results of the proposed method and (<b>d</b>–<b>f</b>) the clustering results of the K-means method.</p>
Figure 8
<p>Determination of the clustering threshold value of the departure time series on weekdays: (<b>a</b>) the contour coefficients of the clustering results under different threshold combinations and (<b>b</b>) the number of clusters under different threshold combinations.</p>
Figure 9
<p>(<b>a</b>–<b>h</b>) The departure time series of the collective human mobility patterns on weekdays and (<b>i</b>–<b>o</b>) the arrival time series of the collective human mobility patterns on weekdays.</p>
Figure 10
<p>Spatial distribution map of the departure and arrival patterns on weekdays. PU, Pick-Up. DF, Drop-Off.</p>
Figure 11
<p>Driving mechanisms of the different travel patterns on weekdays: (<b>a</b>–<b>g</b>) the feature importance and feature contribution metrics of different POIs of mode1–mode7. POIs, Points of interest.</p>
24 pages, 13117 KiB  
Article
A Low-Cost, Wireless, 3-D-Printed Custom Armband for sEMG Hand Gesture Recognition
by Ulysse Côté-Allard, Gabriel Gagnon-Turcotte, François Laviolette and Benoit Gosselin
Sensors 2019, 19(12), 2811; https://doi.org/10.3390/s19122811 - 24 Jun 2019
Cited by 55 | Viewed by 12580
Abstract
Wearable technology can be employed to elevate the abilities of humans to perform demanding and complex tasks more efficiently. Armbands capable of surface electromyography (sEMG) are attractive and noninvasive devices from which human intent can be derived by leveraging machine learning. However, the sEMG acquisition systems currently available tend to be prohibitively costly for personal use or sacrifice wearability or signal quality to be more affordable. This work introduces the 3DC Armband designed by the Biomedical Microsystems Laboratory at Laval University: a wireless, 10-channel, 1000 sps, dry-electrode, low-cost (∼150 USD) myoelectric armband that also includes a 9-axis inertial measurement unit. The proposed system is compared with the Myo Armband by Thalmic Labs, one of the most popular sEMG acquisition systems. The comparison is made by employing a new offline dataset featuring 22 able-bodied participants performing eleven hand/wrist gestures while wearing the two armbands simultaneously. The 3DC Armband systematically and significantly ( p < 0.05 ) outperforms the Myo Armband with three different classifiers employing three different input modalities when using ten seconds or more of training data per gesture. This new dataset, alongside the source code, Altium project and 3-D models, is made readily available for download within a GitHub repository. Full article
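Classifying gestures from a 10-channel, 1000 sps stream starts by segmenting it into fixed-length windows; the 250 ms window and 100 ms stride below are illustrative assumptions, not the paper's exact segmentation.

```python
import numpy as np

fs, channels = 1000, 10                  # 1000 sps, 10 sEMG channels
stream = np.zeros((channels, 5 * fs))    # 5 s of placeholder data (one "cycle")
win, hop = 250, 100                      # assumed 250 ms windows, 100 ms stride

# Stack overlapping windows into (num_windows, channels, window_length)
# examples, a typical input layout for LDA features or a ConvNet.
windows = np.stack([stream[:, i:i + win]
                    for i in range(0, stream.shape[1] - win + 1, hop)])
print(windows.shape)
```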
(This article belongs to the Special Issue EMG Sensors and Applications)
Show Figures

Figure 1
<p>The proposed 3DC Armband. The system and the battery are held in the receptacles identified by 1 and 10 respectively. The label on each part of the armband corresponds to the channels’ order that are recorded for the dataset described in <a href="#sec4-sensors-19-02811" class="html-sec">Section 4</a>.</p>
Figure 2
<p>System-level concept of the multichannel wireless sEMG sensor: The sensor is built around a custom 0.13-<math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>m SoC that includes 10× sEMG channels, each of which encompasses a bioamplifier, a <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi mathvariant="sans-serif">Σ</mi> </mrow> </semantics></math> analog-to-digital converter (ADC), and a 4th order decimation filter. The SoC, the nRF24L01+ low-power wireless transceiver, and the ICM-20948 9-axis IMU are interfaced with an MSP430F5328 low-power MCU.</p>
Figure 3
<p>(<b>a</b>) Two-sided view of the sEMG sensor with each part identified: The printed circuit board (PCB) has a flexible region to fold the two rigid parts on top of each other to save space. (<b>b</b>) The packaged SoC which is wirebonded directly on a PCB substrate. (<b>c</b>) The system folded in its final position beside a Canadian quarter coin (diameter of 23.88 mm).</p>
Figure 4
<p>(<b>a</b>) Analog bandwidth of the bioamplifier (in black), digital bandwidth of the decimation filter (in blue), Myo bandwidth comparison (in orange), and (<b>b</b>) noise spectrum of the bioamplifier. The input referred noise is of 2.5 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>V<math display="inline"><semantics> <msub> <mrow/> <mi>rms</mi> </msub> </semantics></math> over a 500-Hz bandwidth.</p>
Figure 5
<p>(<b>A</b>) The system’s receptacle: The bottom of the unit is used to receive the main electrode, while the system is stored inside. A cover slides on to enclose the system. (<b>B</b>) The battery holder: This receptacle is used to house the power source of the armband and, as such, should be placed next to the system’s holder. Once the battery is placed, the cover can then slide on to protect the system. A standard electrode is placed on the bottom of this holder. (<b>C</b>) This holder houses a standard electrode. For the proposed 3DC Armband, eight such receptacles are required.</p>
Figure 6
<p>The two different armband configurations (left/right) employed in this work with the 3DC being either above or below the Myo armband with respect to the participant’s wrist. This figure also showcases the wide variety of armband positions recorded in the proposed dataset.</p>
Figure 7
<p>The eleven hand/wrist gestures employed in the proposed dataset.</p>
Figure 8
<p>Comparison of the signals recorded with the Myo Armband and the proposed 3DC Armband. The <span class="html-italic">x</span>-axis represents time in seconds, while the <span class="html-italic">y</span>-axis is the different channels of the armbands. The three gestures recorded in order are the <span class="html-italic">chuck grip</span>, <span class="html-italic">Open Hand</span>, and <span class="html-italic">Pinch Grip</span>. Note that these signals were not obtained using the <span class="html-italic">Comparison Dataset</span> recording protocol to show a wider array of gestures in a continuous way.</p>
Figure 9
<p>The raw ConvNet architecture employing 34,667 parameters. In this figure, <span class="html-italic">Conv</span> refers to <span class="html-italic">Convolution</span> and <span class="html-italic">BN</span> refers to Batch Normalization. While the input represented in this figure is that of the 3DC, the architecture remains the same for all considered systems.</p>
Figure 10
<p>The Spectrogram ConvNet architecture employing 95,627 parameters. In this figure, <span class="html-italic">Conv</span> refers to <span class="html-italic">Convolution</span> and <span class="html-italic">BN</span> refers to Batch Normalization. The input represented comes from the 3DC Armband with the channels on the <span class="html-italic">x</span>-axis and the frequency bins on the <span class="html-italic">y</span>-axis. Due to the Myo Armband associated input size, P4 and C5 were removed from the architecture when training on Myo’s data.</p>
Figure 11
<p>Comparison between the Myo and the 3DC Armband employing LDA for classification: The number of cycles corresponds to the amount of data employed for training (one <span class="html-italic">cycle</span> equals 5 s of data per gesture). The Wilcoxon Signed Rank test is applied between the Myo and the 3DC Armband. The null hypothesis is that the median difference between pairs of observations (i.e., accuracy from the same participant with the Myo or the 3DC Armband) is zero. The p-value is shown when the null hypothesis is rejected (significant level set at <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>). The black line represents the standard deviation calculated across all 22 participants.</p>
Figure 12
<p>Confusion Matrices for the Myo and the 3DC Armband employing linear discriminant analysis (LDA) for classification and four cycles of training. A lighter color is better.</p>
Figure 13
<p>Comparison between the Myo and the 3DC Armband employing Raw ConvNet for classification: The number of cycles corresponds to the amount of data employed for training (one <span class="html-italic">cycle</span> equals 5 s of data per gesture). The Wilcoxon Signed Rank test is applied between the Myo and the 3DC Armband. The null hypothesis is that the median difference between pairs of observations (i.e., accuracy from the same participant with the Myo or the 3DC Armband) is zero. The p-value is shown when the null hypothesis is rejected (significant level set at <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>). The black line represents the standard deviation calculated across all 22 participants.</p>
Figure 14
<p>Confusion Matrices for the Myo and the 3DC Armband employing the <span class="html-italic">Raw</span> ConvNet for classification and four cycles of training. A lighter color is better.</p>
Figure 15
<p>Comparison between the Myo and the 3DC Armband employing the <span class="html-italic">Spectrogram</span> ConvNet for classification: The number of cycles corresponds to the amount of data employed for training (one <span class="html-italic">cycle</span> equals 5 s of data per gesture). The Wilcoxon Signed Rank test is applied between the Myo and the 3DC Armband. The null hypothesis is that the median difference between pairs of observations (i.e., accuracy from the same participant with the Myo or the 3DC Armband) is zero. The p-value is shown when the null hypothesis is rejected (significant level set at <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>). The black line represents the standard deviation calculated across all 22 participants.</p>
Figure 16
<p>Confusion Matrices for the Myo and the 3DC Armband employing the <span class="html-italic">Spectrogram</span> ConvNet for classification and four cycles of training. A lighter color is better.</p>
Figure A1
<p>Confusion Matrices for the Myo and the 3DC Armband employing LDA for classification and one cycle of training. Lighter is better.</p>
Figure A2
<p>Confusion Matrices for the Myo and the 3DC Armband employing LDA for classification and two cycles of training. Lighter is better.</p>
Figure A3
<p>Confusion Matrices for the Myo and the 3DC Armband employing LDA for classification and three cycles of training. Lighter is better.</p>
Figure A4
<p>Confusion Matrices for the Myo and the 3DC Armband employing the <span class="html-italic">Raw</span> ConvNet for classification and one cycle of training. Lighter is better.</p>
Figure A5
<p>Confusion Matrices for the Myo and the 3DC Armband employing the <span class="html-italic">Raw</span> ConvNet for classification and two cycles of training. Lighter is better.</p>
Figure A6
<p>Confusion Matrices for the Myo and the 3DC Armband employing the <span class="html-italic">Raw</span> ConvNet for classification and three cycles of training. Lighter is better.</p>
Figure A7
<p>Confusion Matrices for the Myo and the 3DC Armband employing the <span class="html-italic">Spectrogram</span> ConvNet for classification and one cycle of training. Lighter is better.</p>
Figure A8
<p>Confusion Matrices for the Myo and the 3DC Armband employing the <span class="html-italic">Spectrogram</span> ConvNet for classification and two cycles of training. Lighter is better.</p>
Figure A9
<p>Confusion Matrices for the Myo and the 3DC Armband employing the <span class="html-italic">Spectrogram</span> ConvNet for classification and three cycles of training. Lighter is better.</p>
16 pages, 4459 KiB  
Article
An Implantable Inductive Near-Field Communication System with 64 Channels for Acquisition of Gastrointestinal Bioelectrical Activity
by Amir Javan-Khoshkholgh and Aydin Farajidavar
Sensors 2019, 19(12), 2810; https://doi.org/10.3390/s19122810 - 24 Jun 2019
Cited by 18 | Viewed by 5240
Abstract
High-resolution (HR) mapping of the gastrointestinal (GI) bioelectrical activity is an emerging method to define GI dysrhythmias such as gastroparesis and functional dyspepsia. Currently, there is no solution available to conduct HR mapping in long-term studies. We have developed an implantable 64-channel closed-loop near-field communication system for real-time monitoring of gastric electrical activity. The system is composed of an implantable unit (IU), a wearable unit (WU), and a stationary unit (SU) connected to a computer. Simultaneous data telemetry and power transfer between the IU and WU is carried out through a radio-frequency identification (RFID) link operating at 13.56 MHz. Data at the IU are encoded according to a self-clocking differential pulse position algorithm and load-shift-keying modulated with only a 6.25% duty cycle to be backscattered to the WU over the inductive path. The retrieved data at the WU are then either transmitted to the SU for real-time monitoring through an ISM-band RF transceiver or stored locally on a micro-SD memory card. The measurement results demonstrated successful data communication at a rate of 125 kb/s when the distance between the IU and WU is less than 5 cm. The signals recorded in vitro at the IU and received by the SU were verified using a graphical user interface. Full article
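The low-duty-cycle encoding can be illustrated with a simple pulse-position sketch: at 125 kb/s each bit slot lasts 8 µs, and a single 0.5 µs high pulse per slot yields the stated 6.25% duty cycle. The sub-slot positions below are assumptions for illustration; the paper's exact differential pulse position scheme is not reproduced here.

```python
# Illustrative low-duty-cycle pulse-position encoder (not the paper's exact
# differential algorithm). The numbers match the stated system: 8 us bit
# slots (125 kb/s) and 0.5 us pulses, i.e., a 6.25% duty cycle.
BIT_SLOT_US = 8.0
PULSE_US = 0.5
SUBSLOTS = int(BIT_SLOT_US / PULSE_US)   # 16 sub-slots of 0.5 us per bit

def encode(bits, pos0=2, pos1=10):       # assumed positions for "0" and "1"
    """Emit one high sub-slot per bit; its position carries the bit value."""
    wave = []
    for b in bits:
        slot = [0] * SUBSLOTS
        slot[pos1 if b else pos0] = 1
        wave.extend(slot)
    return wave

wave = encode([1, 0, 1, 1])
print(sum(wave) / len(wave))  # 0.0625 duty cycle: one pulse per bit slot
```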
(This article belongs to the Special Issue Near-Field Communication (NFC) Sensors)
Show Figures

Figure 1
<p>The block diagram of the near-field communication system for simultaneous wireless power transfer and telemetric acquisition of the gastric bioelectrical activity is shown.</p>
Figure 2
<p>The schematic of the developed encoding algorithm is shown. (<b>a</b>) High and low digital logic values: “0” and “1”, (<b>b</b>) IEEE 802.3 standard Manchester encoding with 50% duty cycle, and (<b>c</b>) developed differential pulse position encoding with only 0.5 µs high-pulse width.</p>
Figure 3
<p>The circuit schematic for the analysis of load-shift keying data modulation is shown. L<sub>P</sub> shows the primary coil matched to 50 Ω with C<sub>P1</sub> and C<sub>P2</sub>, and L<sub>S</sub> presents the secondary coil with the resonant capacitor of C<sub>S</sub>.</p>
Figure 4
<p>The schematic of the load-shift keying data modulation is shown. (<b>a</b>) The digital sequences of “0”s and “1”s are encoded by the differential pulse position algorithm at the implantable unit, and (<b>b</b>) the encoded data, modulated over the 13.56 MHz carrier signal, can be seen by the envelope detector at the wearable unit. The figure only presents the concept and does not take into consideration possible instantaneous changes of the transmitted power.</p>
Figure 5
<p>The detailed block diagram of the system consisting of the implantable, wearable and stationary units for near-field communication and wireless power transfer is presented.</p>
Figure 6
<p>The implemented system for the validation of the 64-channel near-field communication signal acquisition and wireless power transfer, consisting of (<b>a</b>) the stationary unit connected to computer, (<b>b</b>) the wearable unit connected to a LiPo battery, (<b>c</b>) the implantable unit and (<b>d</b>) the transmitter coil. The inset shows the side view of the holder used to adjust various distances and angles between the primary and secondary coils.</p>
Figure 7
<p>The impedances of (left) the wearable unit’s transmitter coil with the 50 Ω capacitive matching network equal to (49.5 + j0.3) Ω, (right) the impedance of the implantable unit’s receiver coil resonant LC network equal to (16.5 + j0.0041) kΩ, both measured at the RFID carrier frequency of 13.56 MHz.</p>
Figure 8
<p>(<b>a</b>) and (<b>b</b>) show the received power and efficiency when the distance between the TX and RX coils is changed from 2 cm to 5 cm for air and raw chicken. (<b>c</b>) and (<b>d</b>) show the received power and efficiency when the alignment between the TX and RX coils is changed from 0° to 60° for air and raw chicken.</p>
Figure 9
<p>The benchtop setup for the verification of the near field communication recording is shown. A 5-min sample of slow waves recorded in vivo was loaded into a multifunction data acquisition device (DAQ USB-6218, National Instrument) and streamed into the implantable unit through saline solution. The signals were then recorded, sent to the wearable unit through the inductive link and wirelessly transmitted to the stationary unit.</p>
Full article ">Figure 10
<p>(<b>a</b>) The data encoded according to differential pulse position algorithm at the implantable unit, (<b>b</b>) the voltage of the secondary coil which drops to zero when the back-telemetry circuit sends a high-pulse of data, (<b>c</b>) the voltage of the primary coil which slightly increases when there is a change of impedance at the secondary coil, and (<b>d</b>) the demodulated data at the output of the RFID reader’s envelope detector. The time per division on the x-axis and the voltage per division on the y-axis for all four signals are 10 µs and 2 V, respectively.</p>
Full article ">Figure 11
<p>Signals received in the GUI are shown. The <span class="html-italic">X</span> and <span class="html-italic">Y</span> axes are time (0 to 160 s) and amplitude (0 V to 3.3 V), respectively. Only two of the 64 channels are shown here. Eight slow-wave peaks in a 160 s window translate to 3 cycles per minute.</p>
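The rate quoted in the caption is simple arithmetic — peaks per window rescaled to one minute:

```python
# 8 slow-wave peaks observed in a 160 s window
peaks, window_s = 8, 160
cycles_per_minute = peaks * 60 / window_s  # -> 3.0 cycles per minute
```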
Full article ">
47 pages, 3820 KiB  
Review
A Literature Review: Geometric Methods and Their Applications in Human-Related Analysis
by Wenjuan Gong, Bin Zhang, Chaoqi Wang, Hanbing Yue, Chuantao Li, Linjie Xing, Yu Qiao, Weishan Zhang and Faming Gong
Sensors 2019, 19(12), 2809; https://doi.org/10.3390/s19122809 - 23 Jun 2019
Cited by 1 | Viewed by 5019
Abstract
Geometric features, such as topological and manifold properties, are used to extract the geometric characteristics of objects. Geometric methods that exploit such features are widely used in computer graphics and computer vision problems. This review presents a literature review on [...] Read more.
Geometric features, such as topological and manifold properties, are used to extract the geometric characteristics of objects. Geometric methods that exploit such features are widely used in computer graphics and computer vision problems. This review presents a literature review on geometric concepts, geometric methods, and their applications in human-related analysis, e.g., human shape analysis, human pose analysis, and human action analysis. It categorizes geometric methods based on the scope of the geometric properties they extract: object-oriented, feature-oriented, and routine-based geometric methods. Given the broad applications of deep learning methods, this review also studies geometric deep learning, which has recently become a popular research topic. Validation datasets are surveyed, and method performances are compiled and compared. Finally, research trends and possible research topics are discussed. Full article
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1
<p>Overall view of this paper. This review is mainly composed of four modules: geometric methods for generic objects; geometric method-based human-related analysis; geometric deep learning for human-related analysis; and generalized geometrics for human-related analysis. Each module has its subsections, each of which is a class of methods based on its categorization standards. HSA, human shape analysis.</p>
Full article ">Figure 2
<p>Main components of <a href="#sec2-sensors-19-02809" class="html-sec">Section 2</a>. This section is composed of four modules: set theory concepts; topology concepts developed from set theory; algebraic topology concepts (topology plus algebra); and manifold concepts (a topology that locally resembles Euclidean spaces).</p>
Full article ">Figure 3
<p>Radial projection from a tetrahedron <span class="html-italic">T</span> onto a sphere with center <math display="inline"><semantics> <mover accent="true"> <mi>T</mi> <mo>^</mo> </mover> </semantics></math>. As an example, a point <span class="html-italic">x</span> on the surface of the tetrahedron is projected onto its corresponding point <math display="inline"><semantics> <mrow> <mi>π</mi> <mo>(</mo> <mi>x</mi> <mo>)</mo> </mrow> </semantics></math> on the sphere by the radial projection function <math display="inline"><semantics> <mi>π</mi> </semantics></math>.</p>
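The radial projection in this caption has a simple closed form, π(x) = c + r·(x − c)/‖x − c‖, where c is the sphere center and r its radius; a minimal sketch (the sample point and sphere are illustrative):

```python
import numpy as np

def radial_projection(x, center, radius=1.0):
    """Project a point x on the tetrahedron's surface radially onto the
    sphere with the given center and radius: pi(x) = c + r*(x-c)/||x-c||."""
    c = np.asarray(center, float)
    v = np.asarray(x, float) - c
    return c + radius * v / np.linalg.norm(v)

# A point outside the unit sphere lands on the sphere along its radial ray.
p = radial_projection([2.0, 0.0, 0.0], [0.0, 0.0, 0.0], radius=1.0)
```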
Full article ">Figure 4
<p>An example of creating a quotient space by gluing: the boundary circle of a disk is glued to a single point. The two-sphere <math display="inline"><semantics> <msup> <mi>S</mi> <mn>2</mn> </msup> </semantics></math> is obtained by collapsing the boundary circle <math display="inline"><semantics> <msup> <mi>S</mi> <mn>1</mn> </msup> </semantics></math> of a disk to a single point.</p>
Full article ">Figure 5
<p>An example of a coordinate chart. The figure illustrates an example of a coordinate chart from <span class="html-italic">U</span> to <math display="inline"><semantics> <mover accent="true"> <mi>U</mi> <mo>˜</mo> </mover> </semantics></math>.</p>
Full article ">Figure 6
<p>Illustration of a tangent space. <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>x</mi> </msub> <mi>M</mi> </mrow> </semantics></math> is the tangent space of the manifold <math display="inline"><semantics> <mi mathvariant="script">M</mi> </semantics></math> at point <span class="html-italic">x</span>.</p>
Full article ">Figure 7
<p>Illustration of a tangent bundle of a manifold. The figure illustrates the tangent bundle of a circle (<b>a</b>) viewed from the side and (<b>b</b>) viewed from the top or bottom.</p>
Full article ">Figure 8
<p>Examples of parallel transports. The figure illustrates two examples of parallel transports under Levi–Civita connections on four sampling positions. The transport on the left side is given by the metric <math display="inline"><semantics> <mrow> <mi>d</mi> <msup> <mi>s</mi> <mn>2</mn> </msup> <mo>=</mo> <mi>d</mi> <msup> <mi>r</mi> <mn>2</mn> </msup> <mo>+</mo> <msup> <mi>r</mi> <mn>2</mn> </msup> <mi>d</mi> <msup> <mi>θ</mi> <mn>2</mn> </msup> </mrow> </semantics></math>. The transport on the right side is given by the metric <math display="inline"><semantics> <mrow> <mi>d</mi> <msup> <mi>s</mi> <mn>2</mn> </msup> <mo>=</mo> <mi>d</mi> <msup> <mi>r</mi> <mn>2</mn> </msup> <mo>+</mo> <mi>d</mi> <msup> <mi>θ</mi> <mn>2</mn> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>Illustration of the exponential and the logarithmic maps. The example point of <span class="html-italic">g</span> on the manifold <math display="inline"><semantics> <mi mathvariant="script">M</mi> </semantics></math> is mapped to a point on the tangent plane <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>e</mi> </msub> <mi>M</mi> </mrow> </semantics></math> using a logarithmic map <math display="inline"><semantics> <mrow> <mi>L</mi> <mi>o</mi> <msub> <mi>g</mi> <mi>M</mi> </msub> <mrow> <mo>(</mo> <mi>g</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>. The exponential map <math display="inline"><semantics> <mrow> <mi>e</mi> <mi>x</mi> <msub> <mi>p</mi> <mi>M</mi> </msub> <mrow> <mo>(</mo> <mi>u</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> is the reverse of the logarithmic map.</p>
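As a concrete instance of the maps in this caption, the unit sphere admits closed-form exponential and logarithmic maps; this is the classical special case (standard textbook formulas, not code from the reviewed papers):

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere at base point x, applied to a
    tangent vector v (v orthogonal to x): walk along the geodesic."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.array(x, float)
    return np.cos(n) * x + np.sin(n) * v / n

def sphere_log(x, y):
    """Logarithmic map: the tangent vector at x whose exponential is y."""
    d = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))  # geodesic distance
    if d < 1e-12:
        return np.zeros_like(x)
    u = y - np.dot(x, y) * x  # component of y orthogonal to x
    return d * u / np.linalg.norm(u)

x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
v = sphere_log(x, y)        # tangent vector at x pointing toward y
y_back = sphere_exp(x, v)   # exp is the inverse of log, recovering y
```

The round trip exp(log(y)) = y mirrors the caption's statement that the exponential map is the reverse of the logarithmic map.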
Full article ">Figure 10
<p>Illustration of a topological data analysis (TDA) pipeline. (<b>a</b>) A 3D object (hand) represented as a point cloud. (<b>b</b>) A filter value is applied to the point cloud, and the object is now colored by the values of the filter functions. (<b>c</b>) The data points are binned into overlapping groups. (<b>d</b>) Each bin is clustered and a center of the cluster is calculated, and a network is built by connecting the cluster center sequentially. The figure is originally from [<a href="#B79-sensors-19-02809" class="html-bibr">79</a>].</p>
Full article ">Figure 11
<p>Three kernel-based distances visualized on human models. Visualized distances between the reference point (indicated by red arrows in the first column of each sub-group) and other points on the model. On the left, the reference point is the right wrist; in the middle, the belly; and on the right, the chest. The first row shows the results from the heat kernel, the second row shows the results from the wave kernel, and the third row shows the results of the proposed kernel in [<a href="#B95-sensors-19-02809" class="html-bibr">95</a>]. Dark blue shows small distances; red represents large distances. ©2014 IEEE. Reprinted, with permission, from R. Litman, and A. M. Bronstein, Learning Spectral Descriptors for Deformable Shape Correspondence, <span class="html-italic">in IEEE Trans. Pattern Anal. Mach. Intell.</span>, 2014, <span class="html-italic">36</span>, 170–180.</p>
Full article ">Figure 12
<p>Heat kernel signature (HKS), wave kernel signature (WKS), and learned spectral descriptors for point matching between human models. Correspondences computed on TOSCA shapes with geodesic distance distortion below <math display="inline"><semantics> <mrow> <mn>10</mn> <mo>%</mo> </mrow> </semantics></math> of the shape diameter using the heat kernel signature, wave kernel signature, and learned spectral descriptor (from left to right) [<a href="#B95-sensors-19-02809" class="html-bibr">95</a>]. ©2014 IEEE. Reprinted, with permission, from R. Litman, and A. M. Bronstein, Learning Spectral Descriptors for Deformable Shape Correspondence, <span class="html-italic">in IEEE Trans. Pattern Anal. Mach. Intell.</span>, 2014, <span class="html-italic">36</span>, 170–180.</p>
Full article ">Figure 13
<p>An action trajectory in R3DG feature space. One point on the action trajectory is an R3DG feature of a pose [<a href="#B103-sensors-19-02809" class="html-bibr">103</a>]. Reprinted from Comput. Vis. Image Underst., Vol. 152, R. Vemulapalli, F. Arrate, and R. Chellappa, R3DG features: Relative 3D geometry-based skeletal representations for human action recognition, 155–166, Copyright 2016, with permission from Elsevier.</p>
Full article ">Figure 14
<p>Illustrations of the differences between extrinsic CNN and intrinsic CNN. Intrinsic methods (right) work on the manifold rather than its Euclidean realization. The figure is originally from [<a href="#B120-sensors-19-02809" class="html-bibr">120</a>]. Reproduced with permission from Michael Bronstein, NIPS Proceedings; published by Neural Information Processing Systems Foundation, Inc., 2016.</p>
Full article ">Figure 15
<p>A training mesh example with its multiple segmentations. To ensure smooth descriptors, the authors in [<a href="#B126-sensors-19-02809" class="html-bibr">126</a>] defined a classification problem for multiple segmentations of the human body. Points on the boundary might be assigned to nearby classes in different segmentations. ©2016 IEEE. Reprinted, with permission, from L. Wei, Q. Huang, D. Ceylan, E. Vouga, and H. Li, Dense Human Body Correspondences Using Convolutional Networks, <span class="html-italic">in Proceedings of the European Conference on Computer Vision</span>, Amsterdam, The Netherlands, 11–14 October 2016, 1544–1553.</p>
Full article ">Figure 16
<p>Spatial construction of geometric CNN. K(K = 2 in the example) scales are considered. <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">Ω</mi> <mi>k</mi> </msub> </semantics></math> is defined as a partition of <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">Ω</mi> <mrow> <mi>k</mi> <mo>-</mo> <mn>1</mn> </mrow> </msub> </semantics></math> into <math display="inline"><semantics> <msub> <mi>d</mi> <mi>k</mi> </msub> </semantics></math> clusters. Each layer of the network transforms a <math display="inline"><semantics> <msub> <mi>f</mi> <mrow> <mi>k</mi> <mo>-</mo> <mn>1</mn> </mrow> </msub> </semantics></math>-dimensional signal indexed by <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">Ω</mi> <mrow> <mi>k</mi> <mo>-</mo> <mn>1</mn> </mrow> </msub> </semantics></math> into a <math display="inline"><semantics> <msub> <mi>f</mi> <mi>k</mi> </msub> </semantics></math>-dimensional signal indexed by <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">Ω</mi> <mi>k</mi> </msub> </semantics></math>. The figure is originally from [<a href="#B127-sensors-19-02809" class="html-bibr">127</a>].</p>
Full article ">Figure 17
<p>Visualized local geodesic polar coordinates. Left: examples of local geodesic patches, center and right: examples of angular weights and radial weights, <math display="inline"><semantics> <msub> <mi>v</mi> <mi>θ</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>v</mi> <mi>ρ</mi> </msub> </semantics></math>, respectively (red denotes larger weights) [<a href="#B128-sensors-19-02809" class="html-bibr">128</a>]. ©2015 IEEE. Reprinted, with permission, from J. Masci, D. Boscaini, M. M. Bronstein, and P. Vandergheynst, Geodesic Convolutional Neural Networks on Riemannian Manifolds, <span class="html-italic">in Proceedings of the IEEE Workshop on 3D Representation and Recognition</span>, Santiago, Chile, 17 December 2015, 832–840.</p>
Full article ">Figure 18
<p>Architecture of the proposed dRNNmodel. In the memory cell, the input gate <math display="inline"><semantics> <msub> <mi mathvariant="bold">i</mi> <mi>t</mi> </msub> </semantics></math> and the forget gate <math display="inline"><semantics> <msub> <mi mathvariant="bold">f</mi> <mi>t</mi> </msub> </semantics></math> are controlled by the derivative of states (DoS) <math display="inline"><semantics> <mfrac> <mrow> <msup> <mi>d</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msup> <msub> <mi>s</mi> <mrow> <mi>t</mi> <mo>-</mo> <mn>1</mn> </mrow> </msub> </mrow> <mrow> <mi>d</mi> <msup> <mi>t</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msup> </mrow> </mfrac> </semantics></math> at <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>-</mo> <mn>1</mn> </mrow> </semantics></math>, and the output gate <math display="inline"><semantics> <msub> <mi mathvariant="bold">o</mi> <mi>t</mi> </msub> </semantics></math> is controlled by the DoS <math display="inline"><semantics> <mfrac> <mrow> <msup> <mi>d</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msup> <msub> <mi>s</mi> <mi>t</mi> </msub> </mrow> <mrow> <mi>d</mi> <msup> <mi>t</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msup> </mrow> </mfrac> </semantics></math> at <span class="html-italic">t</span> [<a href="#B141-sensors-19-02809" class="html-bibr">141</a>]. ©2015 IEEE. Reprinted, with permission, from V. Veeriah, N. Zhuang, and G. Qi, Differential Recurrent Neural Networks for Action Recognition, <span class="html-italic">in Proceedings of the IEEE International Conference on Computer Vision</span>, Región Metropolitana, Chile, 11–18 December 2015, 4041–4049.</p>
Full article ">Figure 19
<p>Examples from the Kids dataset. The figure is originally from [<a href="#B94-sensors-19-02809" class="html-bibr">94</a>].</p>
Full article ">Figure 20
<p>Contents from the H3D dataset. The figure is originally from the website [<a href="#B149-sensors-19-02809" class="html-bibr">149</a>].</p>
Full article ">Figure 21
<p>Examples from the Partial Shape Dataset. The figure is originally from the website [<a href="#B18-sensors-19-02809" class="html-bibr">18</a>].</p>
Full article ">Figure 22
<p>Examples from the RGB-D People Dataset. The figure shows the color image data (<b>a</b>) and the dense depth data (<b>b</b>) of three exemplar frames. The figure is originally from the website [<a href="#B150-sensors-19-02809" class="html-bibr">150</a>,<a href="#B151-sensors-19-02809" class="html-bibr">151</a>].</p>
Full article ">Figure 23
<p>Contents from the RGB-D Human Tracking Dataset. The figure is originally from the RGB-D Human Tracking Dataset website [<a href="#B152-sensors-19-02809" class="html-bibr">152</a>].</p>
Full article ">Figure 24
<p>Sample images from the UR Fall dataset. The figure was captured from the demo video on the UR Fall Dataset website [<a href="#B154-sensors-19-02809" class="html-bibr">154</a>].</p>
Full article ">Figure 25
<p>The figure shows exemplary point-to-point maps from one human body model to another. The proposed geometric method (right) performs better overall than the compared SHOT (left) method on the entire shape [<a href="#B83-sensors-19-02809" class="html-bibr">83</a>]. Republished with permission of ACM, from ACM Trans. Graph., E. Corman and M. Ovsjanikov, Vol. 38, 2019; permission conveyed through Copyright Clearance Center, Inc.</p>
Full article ">Figure 26
<p>The figure shows exemplary results on partial symmetries of human body models. The partial human body models are obtained by removing certain body parts, and the removed body parts are marked in semitransparent dark gray. The experiments are carried out under various regularization coefficients (the horizontal axis) and various body part sizes (the vertical axis). Symmetric body parts are marked with the same color. Discarded body parts are marked in light gray [<a href="#B84-sensors-19-02809" class="html-bibr">84</a>]. Reprinted by permission from SPRINGER NATURE: Springer Nature, Int. J. Comput. Vis., Full and Partial Symmetries of Non-rigid Shapes, D. Raviv, A. M. Bronstein, M. M. Bronstein, R. Kimmel, Copyright 2010.</p>
Full article ">Figure 27
<p>The figure shows exemplary shape recognition results. The first column denotes the query shape, and the second to the fourth columns show the three closest matches [<a href="#B86-sensors-19-02809" class="html-bibr">86</a>]. Reprinted from Pattern Recognition, Vol. 45, D. Smeets, J. Hermans, D. Vandermeulen, P. Suetens, Isometric Deformation Invariant 3D Shape Recognition, 2817–2831, Copyright 2012, with permission from Elsevier.</p>
Full article ">Figure 28
<p>The proposed method produces a good approximation to the full simulation while being 60 times faster. The figure is originally from [<a href="#B102-sensors-19-02809" class="html-bibr">102</a>]. Reproduced with permission from Jernej Barbic, ACM Transactions on Graphics; published by ACM Digital Library, 2016.</p>
Full article ">Figure A1
<p>Vector bundle illustration (by Jakob.scholbach at English Wikipedia, CC BY-SA 3.0, <a href="https://commons.wikimedia.org/w/index.php?curid=6082417" target="_blank">https://commons.wikimedia.org/w/index.php?curid=6082417</a>).</p>
Full article ">
18 pages, 4480 KiB  
Article
Validation of Electroencephalographic Recordings Obtained with a Consumer-Grade, Single Dry Electrode, Low-Cost Device: A Comparative Study
by Héctor Rieiro, Carolina Diaz-Piedra, José Miguel Morales, Andrés Catena, Samuel Romero, Joaquin Roca-Gonzalez, Luis J. Fuentes and Leandro L. Di Stasi
Sensors 2019, 19(12), 2808; https://doi.org/10.3390/s19122808 - 23 Jun 2019
Cited by 41 | Viewed by 8854
Abstract
The functional validity of the signal obtained with low-cost electroencephalography (EEG) devices is still under debate. Here, we have conducted an in-depth comparison of the EEG recordings obtained with a medical-grade ambulatory device with golden-cup electrodes, the SOMNOwatch + EEG-6, vs. those obtained with a [...] Read more.
The functional validity of the signal obtained with low-cost electroencephalography (EEG) devices is still under debate. Here, we have conducted an in-depth comparison of the EEG recordings obtained with a medical-grade ambulatory device with golden-cup electrodes, the SOMNOwatch + EEG-6, vs. those obtained with a consumer-grade, single-dry-electrode, low-cost device, the NeuroSky MindWave, one of the most affordable devices currently available. We recorded EEG signals at Fp1 using the two devices simultaneously on 21 participants who underwent two experimental phases: a 12-minute resting-state task (alternating two cycles of closed/open-eyes periods), followed by a 60-minute virtual-driving task. We evaluated the EEG recording quality by comparing the similarity between the temporal data series, their spectra, their signal-to-noise ratios, the reliability of the EEG measurements (comparing the closed-eyes periods), as well as their blink detection rates. We found substantial agreement between signals: whereas, qualitatively, the NeuroSky MindWave presented higher levels of noise and a biphasic shape of blinks, the similarity metric indicated that signals from both recording devices were significantly correlated. While the NeuroSky MindWave was less reliable, both devices had a similar blink detection rate. Overall, the NeuroSky MindWave is noise-limited, but provides stable recordings even over long periods of time. Furthermore, its data would be of adequate quality compared to that of conventional wet-electrode EEG devices, except for a potential calibration error and spectral differences at low frequencies. Full article
(This article belongs to the Collection Wearable and Unobtrusive Biomedical Monitoring)
Show Figures

Figure 1
<p>Experiment structure. (<b>a</b>) EEG recording configuration. Red elements and arrows indicate the electrodes used by the SOMNOwatch device. Blue elements and arrows indicate electrodes used by the MindWave device. (<b>b</b>) UML (Unified Modeling Language) activity diagram for the implementation and data acquisition of the experiment. The session started with the placement of the electrodes. After checking the signal quality, EEG data collection started. The experiment started with the resting state task (~15 min) structured as two cycles of task 1 (closed eyes, 3 min each) and task 2 (open eyes, 3 min each). The experiment started with either task 1 or task 2, as the order was random for each participant. Afterwards, the driving task (task 3) started (a 60-minute driving session without breaks). MindWave data was visualized in real time (RTD visualization) for all tasks. Once the three tasks finished, the session ended.</p>
Full article ">Figure 2
<p>Data processing. UML (Unified Modeling Language) activity diagram for the experimental data processing. The process started by checking the device type. For the MindWave, we read the data file and downsampled it to 256 Hz; for the SOMNOwatch, we simply read the data file (already at 256 Hz). Next, we applied an order-10 Chebyshev type II filter, followed by signal alignment. We segmented the data into the three tasks (closed eyes, open eyes, and driving tasks). For each task, we detected blink artifacts, calculated the similarity measure, and removed high-amplitude artifacts, to finally compute the signal-to-noise ratio (SNR) and perform the spectral estimation. Note that rectangles indicate processes, diamonds indicate decisions, and parallelograms indicate output data.</p>
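The first two stages of this pipeline — resampling to a common 256 Hz rate and order-10 Chebyshev type II filtering — can be sketched with SciPy. The cutoff frequency and stop-band attenuation below are illustrative assumptions (the caption specifies only the filter order and type), and `preprocess` is a hypothetical helper name:

```python
import numpy as np
from scipy.signal import cheby2, filtfilt, resample_poly

FS = 256  # common sampling rate after alignment, per the caption

def preprocess(raw, fs_in, cutoff_hz=40.0, stop_atten_db=40.0):
    """Sketch of the pipeline's front end: resample to 256 Hz, then apply an
    order-10 Chebyshev type II low-pass. Cutoff and attenuation are
    illustrative assumptions, not values reported in the paper."""
    raw = np.asarray(raw, float)
    x = resample_poly(raw, FS, int(fs_in)) if fs_in != FS else raw
    b, a = cheby2(10, stop_atten_db, cutoff_hz, btype='low', fs=FS)
    return filtfilt(b, a, x)  # zero-phase filtering keeps the traces aligned

# Example: 4 s of toy data at 512 Hz, downsampled to 256 Hz and filtered.
clean = preprocess(np.random.randn(512 * 4), fs_in=512)
```

Using `filtfilt` (forward-backward filtering) avoids introducing a phase lag between the two devices' traces, which matters before computing the sample-by-sample similarity measure.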
Full article ">Figure 3
<p>Differences between recording sites (Fp1 and AF3) and reference sites (mastoid and lobe) when recorded with the SOMNOwatch. (<b>a</b>) EEG recording configuration. Red arrows indicate the four electrodes’ placements of the SOMNOwatch used to compare the data. (<b>b</b>) Linear regression model for Fp1 and AF3 when recorded with the SOMNOwatch for five participants. The cloud of points shows the data for each subject first centered (by subtracting the average) and divided by its standard deviation, while the solid line represents the result of a linear regression of the form AF3 = b + g × Fp1. The numerical results for the regression and the correspondent determination coefficient are shown in the graph inset. (<b>c</b>) Linear regression model for Mastoid and Lobe references when recorded with the SOMNOwatch for five participants. In this case, the solid line represents the result of a linear regression of the form Lobe = b + g × Mastoid.</p>
Full article ">Figure 4
<p>Comparison of temporal data series. (<b>a</b>) The left panel shows example traces of a simultaneous recording in one participant. The different noise levels and blink shapes are easy to observe. (<b>b</b>) The right panel shows the similarity measures (open circles) between the recordings for each participant and each of the three separate tasks (closed eyes, open eyes, driving task), as compared to a baseline value (dotted lines at the bottom). The values for each subject are displaced on the horizontal axis for representation purposes.</p>
Full article ">Figure 5
<p>Spectral comparison between recording devices. (<b>a</b>) Spectrograms of the simultaneous recordings, in a single participant, with the two acquisition devices. The different tasks (open eyes, closed eyes, and the driving task) are delineated in the temporal axis. While the recordings are qualitatively similar, a higher level of noise can be appreciated in the MindWave data. (<b>b</b>) Power spectral densities obtained from the closed eyes (<b>left</b>), open eyes (<b>center</b>), and driving (<b>right</b>) tasks. Thin lines show individual participants, thick lines the average result. The devices differed in their response at lower frequencies, as evidenced by the MindWave peak around 3 Hz.</p>
Full article ">Figure 6
<p>Waveform and rate of detected blinks. Average waveforms and blink detection rate for each individual participant (<span class="html-italic">N</span> = 21, thin lines) and the population mean (thick lines). (<b>a</b>) Average waveform, with the timepoint of crossing the amplitude threshold (see Methods section) aligned to zero. Amplitudes of individual artifacts are normalized to a maximum value of 1. The different shape of blinks is apparent. (<b>b</b>) Blink detection rate obtained from the closed eyes (<b>left</b>), open eyes (<b>center</b>), and driving (<b>right</b>) tasks. Thin lines show individual participants and thick lines are the average result.</p>
Full article ">Figure 7
<p>Signal-to-noise ratio (SNR) estimation. (<b>a</b>) Additive white noise model employed for the estimation of the SNR. The physiological signal <span class="html-italic">x</span> is filtered by the impulse response of each recording device, resulting in the filtered signal <span class="html-italic">y</span>, at which point white noise (<span class="html-italic">w</span>) is added, resulting in the recorded signals. The SNR is defined as the ratio between the power of the filtered signal and the power of the noise. (<b>b</b>) Results for each participant (thin lines) and the average (thick line) for the closed eyes (<b>left</b>), open eyes (<b>center</b>), and driving (<b>right</b>) tasks. The SNR for the SOMNOwatch is on average 2 dB above that of the MindWave.</p>
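The SNR definition in the caption — filtered-signal power over noise power, expressed in decibels — translates directly to code; `snr_db` and the toy signals below are illustrative:

```python
import numpy as np

def snr_db(filtered, noise):
    """SNR as defined in the caption: power of the filtered physiological
    signal y divided by the power of the additive white noise w, in dB."""
    p_signal = np.mean(np.square(filtered))
    p_noise = np.mean(np.square(noise))
    return 10.0 * np.log10(p_signal / p_noise)

rng = np.random.default_rng(0)
t = np.arange(1000) / 256                 # 256 Hz sampling, as in the study
y = np.sin(2 * np.pi * 10 * t)            # toy "filtered" 10 Hz signal
w = 0.1 * rng.standard_normal(1000)       # toy additive white noise
```

With these toy amplitudes the ratio of powers is roughly 50, i.e., an SNR near 17 dB; the paper reports a ~2 dB gap between the two devices on this scale.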
Full article ">Figure 8
<p>Normalized alpha waves as simultaneously captured by both devices during periods of closed eyes and open eyes. Alpha waves were separated from the rest of the signal using a 10th-order Chebyshev filter. The alpha amplitude is clearly increased during closed-eyes periods.</p>
Full article ">