Sensors, Volume 20, Issue 18 (September-2 2020) – 432 articles

Cover Story (view full-size image): In 2019, the Canadian Space Agency initiated the development of a dedicated wildfire monitoring satellite mission (WildFireSat). This mission will support operational wildfire management, smoke and air quality forecasting, and wildfire carbon emissions reporting. This study introduces the backward traceable approach adopted by the User and Science Team to define requirements for the mission based on Canadian wildfire management needs. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and use the free Adobe Reader to open it.
25 pages, 5266 KiB  
Article
Pollution Weather Prediction System: Smart Outdoor Pollution Monitoring and Prediction for Healthy Breathing and Living
by Sharnil Pandya, Hemant Ghayvat, Anirban Sur, Muhammad Awais, Ketan Kotecha, Santosh Saxena, Nandita Jassal and Gayatri Pingale
Sensors 2020, 20(18), 5448; https://doi.org/10.3390/s20185448 - 22 Sep 2020
Cited by 24 | Viewed by 6692
Abstract
Air pollution has been a looming issue of the 21st century, significantly impacting the surrounding environment and societal health. Although extensive research has been conducted on air pollution and air quality monitoring, both fields remain plagued with unsolved problems. In this study, the Pollution Weather Prediction System (PWP) is proposed to perform air pollution prediction for outdoor sites across various pollution parameters. The presented PWP system is configured with pollution-sensing units, such as the SDS021, MQ07-CO, NO2-B43F, and Aeroqual Ozone (O3), which were used to collect and measure various pollutant levels, such as PM2.5, PM10, CO, NO2, and O3, for 90 days at Symbiosis International University, Pune, Maharashtra, India. Data collection was carried out from December 2019 to February 2020, during the winter. The investigation results validate the success of the presented PWP system. In the conducted experiments, linear regression and artificial neural network (ANN)-based AQI (air quality index) predictions were performed. The study also found that the customized linear regression methodology outperformed the other machine-learning methods used in the experiments, namely linear, ridge, Lasso, Bayes, Huber, Lars, Lasso-lars, stochastic gradient descent (SGD), and ElasticNet regression, as well as the customized ANN regression methodology. The overall AQI value was calculated as the summation of the AQI values of all the presented air pollutants. Finally, web and mobile interfaces were developed to display the predicted values for a variety of air pollutants. Full article
(This article belongs to the Special Issue Smart Assisted Living)
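The abstract's AQI construction (per-pollutant sub-indices combined into an overall value) can be illustrated with a short sketch. This is not the authors' code: the breakpoint table below is a hypothetical placeholder rather than the official Indian AQI table, and the summation step simply mirrors the combination rule the abstract describes.

```python
# Illustrative sketch (not the authors' code): a per-pollutant sub-index is
# computed by piecewise-linear interpolation over a breakpoint table, then
# sub-indices are combined by summation, as the abstract describes.

def sub_index(conc, breakpoints):
    """Map a concentration to a sub-index via piecewise-linear interpolation.

    breakpoints: list of (c_lo, c_hi, i_lo, i_hi) tuples.
    """
    for c_lo, c_hi, i_lo, i_hi in breakpoints:
        if c_lo <= conc <= c_hi:
            return i_lo + (i_hi - i_lo) * (conc - c_lo) / (c_hi - c_lo)
    return float("nan")  # concentration outside the table range

# Hypothetical PM2.5 breakpoints (ug/m3 -> index), for illustration only.
PM25_BP = [(0, 30, 0, 50), (30, 60, 51, 100), (60, 90, 101, 200)]

readings = {"PM2.5": 42.0}  # one pollutant shown; the paper uses five
overall_aqi = sum(sub_index(v, PM25_BP) for v in readings.values())
print(round(overall_aqi))
```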
Figures:
Figure 1. Pollution weather prediction (PWP) system sensing unit arrangements: (a) NodeMCU microcontroller unit; (b) SDS021 (PM2.5 + PM10) sensing unit; (c) MQ07-CO carbon monoxide sensing unit; (d) NO2 sensing unit; (e) O3 sensing unit.
Figure 2. The layered architecture of the PWP system. IoT: Internet of Things.
Figure 3. The PWP system architecture.
Figure 4. A detailed communication workflow of the PWP communication layer.
Figure 5. Linear relationship representations of a variety of pollutants: (a) nitrogen dioxide (NO2), O3 (1 h), O3 (4 h); (b) PM10 and carbon monoxide (CO).
Figure 6. (a) A representation of the relationship between pollutants and time. (b) A variability analysis of the air pollutants PM2.5, PM10, CO, NO2, and O3.
Figure 7. Descendant graph of j(σ).
Figure 8. The customized artificial neural network (ANN) model of the PWP system.
Figure 9. Linear regression and ANN-based comparative analysis of the air pollutants: (a) PM2.5, (b) PM10, (c) CO, (d) NO2, (e) O3 (1 h), and (f) O3 (4 h). AQI: air quality index.
Figure 10. A representation of the PWP system: (a) web interface, (b) mobile interface, and (c) date-wise pollution prediction report.
22 pages, 7466 KiB  
Article
The Impact Analysis of Land Features to JL1-3B Nighttime Light Data at Parcel Level: Illustrated by the Case of Changchun, China
by Fengyan Wang, Kai Zhou, Mingchang Wang and Qing Wang
Sensors 2020, 20(18), 5447; https://doi.org/10.3390/s20185447 - 22 Sep 2020
Cited by 8 | Viewed by 2946
Abstract
Nighttime lights (NTL) are a unique footprint left by human activities, reflecting to some extent the economic and demographic characteristics of a country or region. It is therefore of great significance to explore the impact of land features related to social–economic indexes on NTL intensity in urban areas. At present, few studies have used high-resolution NTL remote sensing data to analyze the factors influencing NTL intensity variation at a fine scale. In this paper, taking Changchun, China as a case study, we selected the new generation of high-spatial-resolution (0.92 m), multispectral NTL imagery, JL1-3B, to evaluate the relationship between NTL intensity and related land features, such as the normalized difference vegetation index (NDVI), land use types, and points of information (POI), at the parcel level, and combined it with Luojia 1-01 images for comparative analysis. After screening features by the Gini index, 17 variables were selected to establish the best random forest (RF) regression models for the Luojia 1-01 and JL1-3B data, corresponding to out-of-bag (oob) scores of 0.8304 and 0.9054, respectively. The impact of features on NTL was determined by calculating feature contributions. JL1-3B data were found to perform better at a finer scale and to provide more information. In addition, JL1-3B data are less affected by the light overflow effect and saturation, and provide more accurate information for smaller parcels. The impact analysis of land features on the two kinds of NTL data shows that JL1-3B images can be used to effectively study the relationship between NTL and human activity. This paper establishes a regression model between the radiance of the two types of NTL data and land features using the RF algorithm, identifies the main land features that impact radiance according to their feature contributions, and compares the performance of the two types of NTL data in the regression. The study is expected to provide a reference for further applications of NTL data, such as land feature inversion, artificial surface monitoring and evaluation, geographic information point estimation, and information mining, and a more comprehensive understanding of land feature impacts on urban social–economic indexes from a unique perspective, which can assist urban planning and related decision-making. Full article
(This article belongs to the Section Remote Sensors)
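The screening-then-regression workflow in the abstract (Gini-based feature selection, then a random forest scored out-of-bag) maps directly onto standard tooling. A minimal sketch, assuming scikit-learn is available and using synthetic stand-ins for the 17 parcel-level variables and the NTL radiance target:

```python
# Minimal sketch (assumes scikit-learn): RF regression with an out-of-bag
# score and impurity-based feature importances, mirroring the workflow the
# abstract describes. Data here are synthetic stand-ins for the parcel table.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 17))          # 17 selected land-feature variables
y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(scale=0.1, size=500)  # radiance proxy

rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
rf.fit(X, y)

print(f"oob R^2: {rf.oob_score_:.4f}")           # the paper reports 0.83 / 0.91
ranked = np.argsort(rf.feature_importances_)[::-1]
print("top features by contribution:", ranked[:5])
```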
Figures:
Graphical abstract
Figure 1. Basic information on the research area, superimposed on a Google Earth image. The red and black regions represent the border of the study area and each part of the JL1-3B data, respectively.
Figure 2. Two kinds of nighttime lights (NTL) data in the study area: (a) JL1-3B; (b) Luojia 1-01.
Figure 3. Distribution of land feature data at the parcel level: (a) road networks (OpenStreetMap, OSM); (b) normalized difference vegetation index (NDVI); (c) land use data; (d) density of points of information (POI).
Figure 4. Methodology flowchart. Step 1: data preprocessing; Step 2: building the parcel datasets; Step 3: establishing the random forest (RF) regression model and analyzing feature contributions.
Figure 5. The RGB relative spectral response of JL1-3B.
Figure 6. Random forest regression procedure.
Figure 7. (a) The area corresponding to different land use types. (b) The average radiance of the Luojia 1-01 and JL1-3B data corresponding to different land use types.
Figure 8. The feature contributions of artificial surface, cultivated land, and NDVI in the RF models for the Luojia 1-01 and JL1-3B data.
Figure 9. The feature contributions of road ancillary facilities, enterprises, and food in the RF models for Luojia 1-01 and JL1-3B.
Figure 10. A sample parcel located in Nanhu Park, superimposed on different types of features and NTL data: (a) POI superimposed on a Google map image taken on May 6, 2018, where colors represent the number of points of different POI types; (b) land use; (c) JL1-3B; (d) Luojia 1-01. The shaded region represents the parcel border generated from OSM.
Figure 11. Limitations of JL1-3B NTL data: JL1-3B images (left) and Google Earth map (right) of two selected regions, including a major building area (a) and traffic lanes (b).
84 pages, 20580 KiB  
Review
Sensing Systems for Respiration Monitoring: A Technical Systematic Review
by Erik Vanegas, Raul Igual and Inmaculada Plaza
Sensors 2020, 20(18), 5446; https://doi.org/10.3390/s20185446 - 22 Sep 2020
Cited by 80 | Viewed by 11605
Abstract
Respiratory monitoring is essential in sleep studies, sports training, patient monitoring, and health at work, among other applications. This paper presents a comprehensive systematic review of respiration sensing systems. After several systematic searches in scientific repositories, the 198 most relevant papers in this field were analyzed in detail. Different items were examined: sensing technique and sensor, respiration parameter, sensor location and size, general system setup, communication protocol, processing station, energy autonomy and power consumption, sensor validation, processing algorithm, performance evaluation, and analysis software. As a result, several trends and the remaining research challenges of respiration sensors were identified. Long-term evaluations and usability tests should be performed. Researchers have designed custom experiments to validate their sensing systems, making it difficult to compare results; a common validation framework for fairly comparing sensor performance is therefore another challenge. The implementation of energy-saving strategies, the incorporation of energy harvesting techniques, the calculation of volume parameters of breathing, and the effective integration of respiration sensors into clothing are other remaining research efforts. Addressing these and other challenges outlined in the paper is a required step toward a feasible, robust, affordable, and unobtrusive respiration sensing system. Full article
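Among the processing algorithms the review catalogs, peak detection with prominence and minimum-distance restrictions (see Figure 29 below) is a common way to turn a respiration signal into a rate. A hedged sketch, assuming NumPy/SciPy and a synthetic 0.33 Hz breathing trace; the prominence and distance values are illustrative stand-ins for the PP and TD restrictions:

```python
# Illustrative sketch (assumes NumPy/SciPy): estimating respiratory rate by
# peak detection with prominence and minimum-distance restrictions, in the
# spirit of the constraints the review catalogs (Figure 29).
import numpy as np
from scipy.signal import find_peaks

fs = 50.0                                  # sampling frequency (Hz)
t = np.arange(0, 60, 1 / fs)               # one minute of signal
resp = np.sin(2 * np.pi * 0.33 * t) \
    + 0.2 * np.random.default_rng(1).normal(size=t.size)

# prominence ~ PP restriction, distance ~ TD restriction from the figure
peaks, _ = find_peaks(resp, prominence=0.5, distance=fs * 1.5)

rr_bpm = len(peaks) / (t[-1] / 60.0)       # breaths per minute
print(f"estimated respiratory rate: {rr_bpm:.1f} bpm")  # true value ~20 bpm
```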
Figures:
Graphical abstract
Figure 1. Most common application fields of sensing systems to monitor breathing.
Figure 2. Literature search results and selection procedure (top); PRISMA diagram (bottom).
Figure 3. Analysis structure.
Figure 4. Graphical explanation of the different breathing parameters. Signal (A) could come directly from the ADC (analog-to-digital converter) of the sensing system, although it could also represent physical respiration magnitudes; the figure is a general representation not contextualized to a specific sensing system. The same goes for signal (B).
Figure 5. Most common sensor locations for respiration monitoring. The sensors shown are for contextualization purposes.
Figure 6. Distribution of sensing techniques (left) and sensors (right) used in the studies of the wearable category.
Figure 7. Distribution of sensing techniques (left) and sensors (right) used in the studies of the environmental category.
Figure 8. Number of studies obtaining the different respiratory parameters for the wearable (top) and environmental (bottom) categories.
Figure 9. Distribution of sensor locations for the wearable studies.
Figure 10. Distribution of sensor locations for the environmental studies.
Figure 11. Representation of possible setups of respiratory sensing systems: (A) data processing on a centralized processing platform; (B) data processing near the remote sensing unit.
Figure 12. Representation of possible setups of respiratory sensing systems.
Figure 13. Schemes of energy harvesting using magnetic induction generation: (A) a DC generator activated by chest movements (figure inspired by Reference [135]); (B) a tube with fixed and free magnets moved by airflow (Reference [240]); (C) a turbine moved by airflow (Reference [241]).
Figure 14. Piezoelectric energy harvesters, in three possible configurations: (A) power generation based on compression or stretching movements associated with breathing (figure inspired by Reference [244]); (B) energy harvesting based on vibration amplified by a magnet (Reference [243]); (C) a technique using low-speed airflow (Reference [245]).
Figure 15. Setups for triboelectric energy harvesting, in three possible configurations: (A) a flat belt-attached setup (figure inspired by Reference [246]); (B) a Z-shaped connector (Reference [77]); (C) movable and fixed supports (Reference [247]).
Figure 16. Electrostatic energy harvesting based on the variation of the upper electrode area owing to the humidity of the exhaled air (figure inspired by Reference [248]).
Figure 17. Schematic of a pyroelectric energy harvester using a mask-mounted breathing prototype (figure inspired by Reference [253]).
Figure 18. Example of a solar-powered system composed of a solar module, a charge regulator, and a microcontroller. The voltage regulator receives an input voltage from the solar cell in the range of 0.3 V to 6 V. The charge regulator manages the charge of the battery (at constant voltage and current). The battery is connected in parallel to the internal voltage regulator of the system's microcontroller.
Figure 19. Charge regulator and battery (low capacity, 150 mAh) integrated into the sensing prototype developed by Vanegas et al. [254], slightly modified. The sensor used in that prototype (a force-sensitive resistor) is included separately for size comparison. Units: cm.
Figure 20. Number of studies adopting wired or wireless data transmission in respiration sensing systems.
Figure 21. Number of respiratory monitoring studies that considered different types of communication technologies.
Figure 22. Number of studies adopting the different processing units.
Figure 23. Distribution of battery lives reported in the respiratory monitoring studies.
Figure 24. Common positions/activities used to validate the breathing sensors (sitting, standing, lying down, walking, running, and sleeping); a chest sensor is used as an example.
Figure 25. Representation of different validation approaches: (A) use of artificial validation prototypes; (B) validation using a metronome; (C) validation using a reference device.
Figure 26. Flow diagram of a typical validation procedure using artificial prototypes.
Figure 27. Flow chart for the validation of a respiration sensor using the methods "metronome as reference" and "validation against a reference device".
Figure 28. Number of studies that adopted the different validation approaches.
Figure 29. Peak detection on a sample respiration signal obtained from the public breathing dataset published in Reference [254]: (A) peak detection on a noisy signal without filtering; (B) peak detection imposing a restriction of p surrounding samples (accepted peak in green); (C) a peak accepted (left, green) and a peak discarded (right, red) when applying the slope restriction; (D) a peak reaching (green) and a peak not reaching (red) the minimum prominence level PP required to be considered valid; (E) two peaks (red) not fulfilling the minimum horizontal distance restriction TD; (F) a peak (red) not fulfilling the vertical minimum level restriction and two peaks that surpass level TL (green); (G) two peaks discarded (red) for not differing by the imposed tidal volume (TV) level from a detected peak (green).
Figure 30. Zero-crossings method exemplified on a real signal obtained from the public breathing dataset of Vanegas et al. [254]: (A) effect of outliers on the calculation of the "zero level"; (B) a signal with trends and the result of applying de-trend processing; (C) use of different "zero levels" in a signal with trends; (D) a noisy signal with several zero-crossings detected where only one of them (green) should have been considered.
Figure 31. Frequency analysis of sample real respiratory signals obtained from a public dataset [254]: (A) effect of the time window (4 s, 8 s, and 16 s) on the frequency calculation, where the true frequency is 0.33 Hz (3 s period), the sampling frequency is 50 Hz, and the results for the 16 s window (Table A3, 0.3125–0.344 Hz) are closest to the true value; (B) effect of noise on frequency detection (noisy signal and its spectrum, B.1; filtered signal and its spectrum, B.2); (C) a breathing signal with low-frequency fluctuations; (D) a breathing signal with fluctuations due to movements of the subject, and its spectrum.
Figure 32. Wavelet transform: (A) 2D representation of the continuous wavelet transform (CWT) (right) of an example signal (left) taken from a dataset of real respiration signals [254] (RR of 20 bpm, i.e., 0.33 Hz; sampling frequency of 50 Hz); (B) multiresolution analysis (MRA) decomposition process (top) and an example of the MRA analysis applied to the signal in (A) (bottom): a six-level decomposition using the Haar wavelet, with two detail levels and the sixth approximation level represented, and the spectrum of the approximation coefficients (level 6) obtained.
Figure 33. Kalman filter algorithm for the fusion of different respiration sensors.
Figure 34. Number of studies using different processing algorithms for the wearable (left) and environmental (right) categories.
Figure 35. Number of studies using the different figures of merit to determine sensor performance for the wearable (top) and environmental (bottom) categories.
Figure 36. Number of studies using the different processing tools for the wearable (top) and environmental (bottom) categories.
19 pages, 7369 KiB  
Article
Passive Extraction of Signal Feature Using a Rectifier with a Mechanically Switched Inductor for Low Power Acoustic Event Detection
by Marko Gazivoda, Dinko Oletić, Carlo Trigona and Vedran Bilas
Sensors 2020, 20(18), 5445; https://doi.org/10.3390/s20185445 - 22 Sep 2020
Cited by 2 | Viewed by 3141
Abstract
Analog hardware used for signal envelope extraction in low-power acoustic event detection interfaces, while attractive for its low complexity and power consumption, suffers from low sensitivity and performs poorly under the low signal-to-noise ratios (SNR) found in undersea environments. To overcome these problems, in this paper we propose a novel passive electromechanical solution for signal feature extraction in the low-frequency acoustic range (200–1000 Hz), in the form of a piezoelectric vibration transducer and a rectifier with a mechanically switched inductor. A simulation study of the novel solution is presented, and a proof-of-concept device is developed and experimentally characterized. We demonstrate its applicability and show the advantages of the passive electromechanical device over the active electrical solution in terms of operation with lower input signals (<20 mV compared to 40 mV) and higher robustness under low-SNR conditions (output voltage loss of 1 mV for −10 dB ≤ SNR < 40 dB, compared to 10 mV). Beyond the signal processing improvements, and compared to our previous work, the presented passive feature extractor would also decrease the power consumption of a detector's channel by over 76%, enabling lifetime extension and/or increased quality of detection with a larger number of channels. To the best of our knowledge, this is the first solution in the literature demonstrating that a passive electromechanical feature extractor can be used in a low-power analog wake-up event detector interface. Full article
(This article belongs to the Special Issue Low Power and Energy Efficient Sensing Applications)
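The feature both extractors target, the signal envelope in a set frequency band, can be emulated digitally to build intuition for what the passive device computes. A conceptual sketch, assuming SciPy; the band edges match the paper's 200–1000 Hz range, while the tone level, noise level, and low-pass cutoff are illustrative choices:

```python
# Conceptual sketch (assumes SciPy), not the authors' circuit: the envelope
# feature can be emulated by band-pass filtering, rectifying, and low-pass
# smoothing a gated tone buried in noise.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 8000.0
t = np.arange(0, 2.0, 1 / fs)
burst = (t > 0.5) & (t < 1.2)                       # gated 315 Hz tone
x = burst * 0.05 * np.sin(2 * np.pi * 315 * t) \
    + 0.01 * np.random.default_rng(2).normal(size=t.size)

band = butter(4, [200, 1000], btype="bandpass", fs=fs, output="sos")
lp = butter(2, 20, btype="lowpass", fs=fs, output="sos")

envelope = sosfilt(lp, np.abs(sosfilt(band, x)))    # rectify, then smooth
print(f"envelope peak: {envelope.max() * 1e3:.1f} mV")
```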
Figures:
Figure 1. (a) An example spectrogram of a signal of interest; T1, T2, T3 represent durations, fb1, fb2, fb3 frequency bands, and E1, E2, E3 the energy (feature) of states within the time-frequency pattern. (b) Block diagram of an analog multichannel pattern-recognition-based event detection system; the marked part is the active electrical feature extractor (here the feature is the signal envelope in a set frequency band). (c) Schematic of the active electrical feature extractor (AE FE). (d) Characteristic signals of the detector: at the transducer (Vin), after the filter (Vfilt), and at the output (Vout); the values of interest in the feature extractor's output voltage are marked (headroom voltage, rise time, and fall time).
Figure 2. Block diagram of the passive electromechanical feature extractor (PEM FE). S: switch; m: magnet at the beam end; mf: fixed magnet; L: inductor; Cr1, Cr2: rectifier capacitors; D1, D2: rectifier diodes. Vpzt(t): voltage generated at the piezoelectric transducer; Vind(t): voltage induced at the inductor L; Vout(t): extractor output voltage.
Figure 3. Schematic of the proposed PEM FE. Red: current Iclosed(t) passing through the PEM FE while the switch S is closed; blue: current Iopen(t) passing through the PEM FE when the switch S opens. L: inductor; Cr1, Cr2: rectifier capacitors; D1, D2: rectifier diodes; RP, CP: parasitic resistance and capacitance of the piezoelectric transducer. Vpzt(t), Vind(t), and Vout(t) as in Figure 2.
Figure 4. Simplified simulation model of the PEM FE, consisting of models of the piezoelectric transducer, the mechanical switch, and the electric circuit.
Figure 5. Frequency selectivity of the PEM FE obtained by changing its physical dimensions (beam length l, beam and magnet mass m, magnet position δ). The output voltage was normalized to its maximal value. Rectifier capacitances Cr1−2 = 33 nF; inductor L = 1 mH. The input vibration frequency was stepped by 5 Hz from 270 Hz to 350 Hz, from 400 Hz to 495 Hz, and from 880 Hz to 1060 Hz for each PEM FE setting; the input vibration energy was set to 1.6 nJ.
Figure 6. Extractor output voltage (Vout) versus input signal energy Ein. Rectifier capacitances Cr1−2 = 33 nF; inductor L = 1 mH. The input vibration energy was set to 0.1 nJ, 0.4 nJ, 0.9 nJ, and 1.6 nJ. The black line with pluses represents the simulation results, the black dashed line the linear approximation, and the red lines the error margins (explained in more detail in the experimental part) for a 325 Hz input vibration frequency; the green, blue, and purple dashed lines represent linear approximations for 300 Hz, 350 Hz, and 375 Hz, respectively.
Figure 7. Waveform of the output voltage Vout for several values of rectifier capacitance: 33 nF, 100 nF, 470 nF, and 1 μF. Input vibrations at 325 Hz generate 50 mV peak-to-peak at the piezoelectric transducer; inductor L = 1 mH. (a) At the beginning of capacitor charging; (b) in stationary conditions.
Figure 8. (a) Photograph of the measurement setup: (1) Keysight 33500B waveform generator, (2) PEM FE, (3) Smart Material Energy Harvesting Kit 1.2 shaker, (4) NI USB-6211 data acquisition card. (b) Physical realization of the PEM FE (without the rectifier): (1) piezoelectric transducer, (2) stopper, (3) cantilever beam, (4) fixed magnet (right) and adjustable magnet (left). The mass of the beam (m) is approximated by the mass of the magnet at its end; δ is the magnet position adjustment parameter; l is the beam length.
Figure 9. Extractor output voltage (Vout) versus input signal energy Ein. Rectifier capacitances Cr1−2 = 33 nF; inductor L = 100 mH. Black pluses: measurement data; blue line: linear interpolation; red lines: error margins.
Figure 10. Frequency selectivity of the PEM FE obtained by changing its physical dimensions (beam length l, beam and magnet mass m, magnet position δ). The output voltage was normalized to its maximal value. Input vibrations generate 50 mV peak-to-peak at the piezoelectric transducer; the input vibration frequency was stepped by 10 Hz from 150 Hz to 210 Hz and by 5 Hz from 290 Hz to 330 Hz for each developed PEM FE. Rectifier capacitances Cr1−2 = 33 nF; inductor L = 100 mH.
Figure 11. Waveform of the output voltage for different values of rectifier capacitance Cr1−2. The voltage generated at the piezoelectric transducer, Vpzt(t), is 100 mV peak-to-peak at 315 Hz. Input vibrations are gated: 0.75 s of signal followed by 4 s of pause. Inductor L = 100 mH.
Figure 12. Photograph of the measurement setup for the active electrical feature extractor [25,31]: (1) Keysight 33500B waveform generator, (2) active electrical feature extractor, (3) National Instruments (NI) USB-6211 data acquisition card, (4) Rigol DP832 power supply.
Figure 13. (a) Waveform of the synthetic input signal Vin: 3 s of a 180 Hz sine followed by a 5 s pause, at 0 dB signal-to-noise ratio (SNR); the voltage shown was normalized to its maximal value. (b) Spectrogram of the synthetic input signal.
Figure 14. (a) Waveform of the prerecorded input signal Vin, with a duration of approximately 3 s followed by around 3 s of pause; the voltage shown was normalized to its maximal value. (b) Spectrogram of the prerecorded input signal.
Figure 15. Comparison of the outputs of the PEM FE and the AE FE. Rectifier capacitances: Cr1−2 = 22 nF for the AE FE and Cr1−2 = 1 µF for the PEM FE; PEM FE inductor L = 100 mH. (a) Synthetic input signals: 3 s of a 180 Hz sine and 5 s of pause; filter and piezoelectric transducer outputs Vfilt and Vpzt of 10–70 mV peak-to-peak. (b) Prerecorded speedboat signal: 3 s of signal and 3 s of pause; Vfilt and Vpzt of 10–70 mV peak-to-peak.
21 pages, 5370 KiB  
Article
Development of a Smart Ball to Evaluate Locomotor Performance: Application in Adolescents with Intellectual Disabilities
by Wann-Yun Shieh, Yan-Ying Ju, Yu-Chun Yu, Steven Pandelaki and Hsin-Yi Kathy Cheng
Sensors 2020, 20(18), 5444; https://doi.org/10.3390/s20185444 - 22 Sep 2020
Cited by 2 | Viewed by 3160
Abstract
Adolescents with intellectual disabilities display maladaptive behaviors in activities of daily living because of physical abnormalities or neurological disorders. These adolescents typically exhibit poor locomotor performance and low cognitive abilities in moving the body to perform tasks (e.g., throwing an object or catching an object) smoothly, quickly, and gracefully when compared with typically developing adolescents. Measuring movement time and distance alone does not provide a complete picture of the atypical performance. In this study, a smart ball with an inertial sensor embedded inside was proposed to measure the locomotor performance of adolescents with intellectual disabilities. Four ball games were designed for use with this smart ball: two lower limb games (dribbling along a straight line and a zigzag line) and two upper limb games (picking up a ball and throwing-and-catching). The results of 25 adolescents with intellectual disabilities (aged 18.36 ± 2.46 years) were compared with the results of 25 typically developing adolescents (aged 18.36 ± 0.49 years) in the four tests. Adolescents with intellectual disabilities exhibited considerable motor-performance differences from typically developing adolescents in terms of moving speed, hand–eye coordination, and object control in all tests. Full article
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
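One of the smart-ball measures, the reaction time from the beep to the onset of ball motion (see Figure 4 below), reduces to a simple onset-detection computation on the accelerometer magnitude. A sketch on synthetic data; the sampling rate, threshold, and data layout are assumptions for illustration, not the authors' implementation:

```python
# Illustrative sketch (hypothetical data layout): deriving the reaction time
# T_reaction as the interval from the beep to the first sample where the
# ball's acceleration magnitude departs from rest, as in Figure 4.
import numpy as np

fs = 100.0                                   # sensor sampling rate (Hz), assumed
rng = np.random.default_rng(3)
acc = np.concatenate([rng.normal(1.0, 0.01, 150),    # at rest (~1 g)
                      rng.normal(1.8, 0.2, 100)])    # ball picked up / moving
beep_idx = 50                                # beep timestamp ("x" in Figure 4)

moving = np.abs(acc - 1.0) > 0.1             # threshold on deviation from 1 g
onset_idx = beep_idx + np.argmax(moving[beep_idx:])  # first post-beep motion

t_reaction = (onset_idx - beep_idx) / fs
print(f"T_reaction = {t_reaction:.2f} s")    # 1.00 s for this synthetic trace
```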
Figures:
Figure 1. Illustration of the circuit design in the core of the ball.
Figure 2. Sensor locations: (a) the smart ball sensors, including an accelerometer (ball_acc) and a gyroscope (ball_gyr) at the core of the ball; (b) the wearable sensors, including six trackers at the right or left outer sides of the arms, thighs, and calves, and one at the lower back.
Figure 3. Illustrations of the tests: (a) picking up the ball at the spot where it was placed, at three different heights; (b) throwing-and-catching the ball, where h denotes the height the ball should reach; (c) dribbling the ball along a straight line, where the dotted arrow shows the dribbling direction; (d) dribbling the ball along a zigzag line with five obstacles staggered in zigzag order.
Figure 4. Illustration of the signal from ball_acc. The reaction time (T_reaction) was defined as the interval from the beep sound (denoted "x") to the time at which the ball started moving.
Figure 5. Illustrations of signals from ball_acc: (a) a successful throw and catch, where the signal is cut off (period t_x) due to the weightless state while the sensor is in the air; (b) a throw without a successful catch, where t_y marks the first impact of the ball on the ground and is followed by another weightless period with a series of pulses due to the bounces of the ball (period t_z).
Figure 6. Illustration of the signal from ball_gyr during straight-line dribbling. The total rotation angle of the ball can be obtained by accumulating the sensor signal over the rotating period.
Figure 7. Illustration of the swinging angle measured from the right_calf on the sagittal plane. The angle from a valley point (blue circle) to a peak point (red cross) represents the degree through which the participant swung the calf once.
Figure 8. Illustration of the trunk tilt angle measured from back_trunk on the sagittal plane. The angle from a valley point (blue circle) to a peak point (red cross) represents the tilt angle through which the participant swung the trunk to maintain balance.
Figure 9. Illustrations of the validation tests: (a) for the Xsens tracker and (b) for the smart ball. All tests were captured by eight surrounding Vicon cameras.
Figure 10. Results for T_reaction. The participants with an intellectual disability (ID) had a slower reaction time only in the picking-up-the-ball test.
Figure 11. Results for R_throwing-and-catching. The participants with ID exhibited clearly lower throwing-and-catching rates than the typically developing (TD) participants.
Figure 12. Results for D_dribbling_straight and D_dribbling_zigzag. The participants with ID exhibited longer dribbling distances than the TD participants in both tests.
Figure 13. Results for T_dribbling_straight and T_dribbling_zigzag. The participants with ID displayed longer dribbling times than the TD participants in both tests.
Figure 14. Results for the limb swinging frequencies: (a) F_swing_arm, (b) F_swing_thigh, and (c) F_swing_calf. The participants with ID displayed overall lower limb swing frequencies in both tests.
Figure 15. Results for the limb swinging angles: (a) A_swing_arm, (b) A_swing_thigh, and (c) A_swing_calf. The participants with ID showed overall smaller limb swing angles in both tests, with the difference in calf angles exceeding 10°.
Figure 16. Results for the trunk tilt angle A_trunk. The participants with ID displayed larger tilt angles in the straight-line test but smaller angles in the zigzag-line test.
22 pages, 6231 KiB  
Article
End-to-End Automated Lane-Change Maneuvering Considering Driving Style Using a Deep Deterministic Policy Gradient Algorithm
by Hongyu Hu, Ziyang Lu, Qi Wang and Chengyuan Zheng
Sensors 2020, 20(18), 5443; https://doi.org/10.3390/s20185443 - 22 Sep 2020
Cited by 23 | Viewed by 4185
Abstract
Changing lanes while driving requires coordinating the lateral and longitudinal controls of a vehicle, considering its running state and the surrounding environment. Although the existing rule-based automated lane-changing method is simple, it is unsuitable for the unpredictable scenarios encountered in practice. Therefore, using a deep deterministic policy gradient (DDPG) algorithm, we propose an end-to-end method for automated lane changing based on lidar data. The distance state information of the lane boundary and the surrounding vehicles obtained by the agent in a simulation environment is taken as the state space of the reinforcement-learning lane-change problem. The steering wheel angle and longitudinal acceleration form the action space, and both the state and action spaces are continuous. In the reward function, collision avoidance and different expected lane-changing distances representing different driving styles are considered for safety; the angular velocity of the steering wheel and the jerk are considered for comfort; and the minimum speed limit for lane changing and the agent's control for a quick lane change are considered for efficiency. For a one-way two-lane road, a visual simulation environment was constructed using Pyglet. By comparing the lane-changing tracks of two driving styles in a simplified traffic flow scene, we study the influence of driving style on the lane-changing process and lane-changing time. Through training and adjustment of the combined lateral and longitudinal control of autonomous vehicles with different driving styles in complex traffic scenes, the vehicles could complete a series of driving tasks while reflecting driving-style differences. The experimental results show that autonomous vehicles can reflect the differences in driving styles at the time of lane change at the same speed. Under the combined lateral and longitudinal control, the autonomous vehicles exhibit good robustness to different speeds and traffic densities in different road sections. Thus, autonomous vehicles trained using the proposed method can learn an automated lane-changing policy that accounts for safety, comfort, and efficiency. Full article
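The three-part reward structure described in the abstract, with driving style entering through the expected lane-changing distance, can be sketched as a single scalar function. The weights and functional forms below are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of a composite lane-change reward: safety (collision and a
# style-dependent expected lane-change distance), comfort (steering-wheel
# angular velocity and jerk), and efficiency (speed floor and speed bonus).
# All coefficients are illustrative, not the paper's values.

def lane_change_reward(collided: bool, lc_distance: float, expected_distance: float,
                       steer_rate: float, jerk: float, speed: float,
                       v_min: float = 15.0) -> float:
    if collided:
        return -100.0                                  # terminal safety penalty
    security = -abs(lc_distance - expected_distance)   # style-dependent target
    comfort = -0.5 * abs(steer_rate) - 0.1 * abs(jerk)
    efficiency = -5.0 if speed < v_min else 0.1 * speed
    return security + comfort + efficiency

# An "aggressive" style would use a shorter expected_distance than a
# "conservative" one, shaping how eagerly the agent completes the maneuver.
print(lane_change_reward(False, 45.0, 40.0, 0.2, 0.5, 20.0))
```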
Figures:
Figure 1. Framework of automated lane-changing based on the deep deterministic policy gradient (DDPG) algorithm.
Figure 2. Basic framework of reinforcement learning.
Figure 3. Neural network framework.
Figure 4. Visual result of simulation environment.
Figure 5. State space of automated lane change.
Figure 6. Rewards of model training: (a) total average reward, (b) average security reward, (c) average comfort reward, (d) average efficiency reward.
Figure 7. Tracks of lane change with different driving styles in the same scene: (a) aggressive driving style, (b) conservative driving style.
Figure 8. Single-episode rewards at different speeds.
Figure 9. Angular velocity of steering wheel.
Figure 10. Rewards of different driving styles: (a) total average reward, (b) average efficiency reward, and (c) average comfort reward.
Figure 11. Success and collision rates of different driving styles: (a) collision rate, (b) success rate.
Figure 12. Single-episode rewards of agents in different scenarios: (a) different initial speeds, and (b) different average speeds in the road section.
Figure 13. Comparative results with and without considering comfort reward: (a) angular velocity of steering wheel, and (b) jerk.
19 pages, 4771 KiB  
Article
Canadian Biomass Burning Aerosol Properties Modification during a Long-Ranged Event on August 2018
by Christina-Anna Papanikolaou, Elina Giannakaki, Alexandros Papayannis, Maria Mylonaki and Ourania Soupiona
Sensors 2020, 20(18), 5442; https://doi.org/10.3390/s20185442 - 22 Sep 2020
Cited by 8 | Viewed by 3566
Abstract
The aim of this paper is to study the spatio-temporal evolution of a long-lasting Canadian biomass burning event that affected Europe in August 2018. The event produced biomass burning aerosol layers that were observed during their transport from Canada to Europe from 16 to 26 August 2018, using active remote sensing data from the space-borne Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) system. A total of 745 aerosol layers were detected, of which 42% were identified as pure biomass burning. The remaining 58% were attributed to smoke mixed with polluted dust (34%), clean continental (10%), polluted continental (5%), desert dust (6%), or marine aerosols (3%). In this study, smoke layers, pure and mixed, were observed by the CALIPSO satellite from 0.8 up to 9.6 km above mean sea level (amsl.). The mean altitude of these layers was found to lie between 2.1 and 5.2 km amsl. The Ångström exponent related to the aerosol backscatter coefficient (532/1064 nm) ranged between 0.9 and 1.5, indicating aerosols of different sizes. The mean linear particle depolarization ratio at 532 nm for pure biomass burning aerosols was 0.05 ± 0.04, indicating near-spherical aerosols. We also observed that, in the absence of aerosol mixing, the sphericity of pure smoke aerosols does not change during air mass transport (0.05–0.06). On the contrary, when the smoke is mixed with desert dust, the mean linear particle depolarization ratio may reach values up to 0.20 ± 0.04, especially close to the African continent (Region 4). Full article
(This article belongs to the Special Issue Lidar Remote Sensing of Aerosols Application)
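The backscatter-related Ångström exponent quoted in the abstract follows directly from the ratio of backscatter coefficients at the two wavelengths, AE = -ln(β_532/β_1064)/ln(532/1064). A minimal sketch; the sample backscatter values are hypothetical but chosen to land inside the reported 0.9–1.5 range:

```python
# Minimal sketch of the backscatter-related Angstrom exponent (AE):
# AE = -ln(beta_1/beta_2) / ln(lambda_1/lambda_2). The sample backscatter
# values are hypothetical, for illustration only.
import math

def angstrom_exponent(beta_532: float, beta_1064: float) -> float:
    return -math.log(beta_532 / beta_1064) / math.log(532.0 / 1064.0)

beta_532, beta_1064 = 2.5e-3, 1.0e-3   # km^-1 sr^-1, illustrative values
print(f"AE(532/1064) = {angstrom_exponent(beta_532, beta_1064):.2f}")  # ~1.32
```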
Show Figures

Figure 1: (a) Time-averaged map of combined Dark Target and Deep Blue AOD at 550 nm for land and ocean (mean daily 1°, MODIS-Aqua MYD08_D3 v6.1) over a 10-day period starting 16 August 2018. (b) 10-day forward HYSPLIT trajectories starting 16 August; red dots mark active fires in BC, Canada, observed by MODIS with confidence above 80%; magenta and green lines mark nighttime and daytime CALIPSO orbits; colored boxes mark the four subregions (R1–R4) of the smoke motion analyzed in Section 3.2.
Figure 2: (a) 10-day forward ensemble air-mass trajectories starting 16 August (11:00 UTC), as provided by HYSPLIT, with different colors for all possible offsets in longitude, latitude, and altitude of the ensemble analysis. (b) The same trajectories over-plotted on selected CALIPSO curtains.
Figure 3: Vertically resolved aerosol optical properties (β at 532 nm, β-related AE 532/1064 nm, LPDR at 532 nm) and aerosol typing according to the CALIPSO algorithm (M: marine, D: dust, PC: polluted continental, CC: clean continental, PD: polluted dust, S: smoke), retrieved from the nighttime CALIOP orbit of 20 August at 42.5° N, 73.8° W; shaded bands give the standard deviation of each property.
Figure 4: CALIPSO total attenuated backscatter coefficient at 532 nm and aerosol subtypes versus altitude, latitude, and longitude for nighttime and daytime orbits (16–26 August 2018).
Figure 5: Percentages of aerosol-layer mixing types for the total event; types below 3% are not numbered in the figure.
Figure 6: Aerosol-layer mixtures by altitude (amsl), β at 532 nm, LPDR at 532 nm, and β-related AE 532/1064 nm for the total event (S: pure smoke; PD: smoke mixed with polluted dust; CC: with clean continental; PC: with polluted continental; D: with dust; M: with marine layers).
Figure 7: Percentages of aerosol-layer mixing types for the subregions R1–R4; types below 3% are not numbered in the figure.
Figure 8: Aerosol-layer mixtures by altitude, β at 532 nm, LPDR at 532 nm, and β-related AE 532/1064 nm for the subregions R1–R4 (same type abbreviations as Figure 6).
Figure 9: Pure smoke aerosol layers by altitude (amsl) and their optical properties (β at 532 nm, LPDR at 532 nm, β-related AE 532/1064 nm) for the subregions R1–R4.
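As an aside on the backscatter-related Ångström exponent quoted in the abstract above: a minimal sketch of the standard two-wavelength computation, with illustrative backscatter values that are not taken from the paper.

```python
import numpy as np

def backscatter_angstrom_exponent(beta_532, beta_1064):
    """Backscatter-related Angstrom exponent for the 532/1064 nm pair.

    Assumes beta ~ lambda**(-AE), so AE = ln(b532/b1064) / ln(1064/532).
    Values near 1-1.5 suggest fine smoke particles; values near 0
    suggest coarse particles such as dust.
    """
    b532 = np.asarray(beta_532, dtype=float)
    b1064 = np.asarray(beta_1064, dtype=float)
    return np.log(b532 / b1064) / np.log(1064.0 / 532.0)

# Illustrative backscatter coefficients (Mm^-1 sr^-1), not from the paper:
print(backscatter_angstrom_exponent([2.4, 1.8, 0.9], [1.0, 0.9, 0.5]))
```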
20 pages, 45309 KiB  
Article
Optical Dual Laser Based Sensor Denoising for Online Metal Sheet Flatness Measurement Using Hermite Interpolation
by Marcos Alonso, Alberto Izaguirre, Imanol Andonegui and Manuel Graña
Sensors 2020, 20(18), 5441; https://doi.org/10.3390/s20185441 - 22 Sep 2020
Cited by 4 | Viewed by 3628
Abstract
Flatness sensors are required for quality control of metal sheets obtained from steel coils by roller-leveling and cutting systems. This article presents an innovative system for real-time, robust surface estimation of flattened metal sheets, composed of two line lasers and a conventional 2D camera. Laser plane triangulation is used to retrieve surface height along virtual surface fibers. The dual laser allows instantaneous, robust, and quick estimation of the fiber height derivatives. Hermite cubic interpolation along the fibers allows real-time surface estimation and high-frequency noise removal. The noise sources are the vibrations induced in the sheet by its movement during the process and by mechanical events such as cutting into separate pieces. The system is validated on synthetic surfaces that simulate the most critical noise sources and on real data obtained by installing the sensor in an actual steel mill. Compared with conventional filtering methods, we achieve at least a 41% improvement in the accuracy of the surface reconstruction. Full article
Show Figures

Figure 1: Flowchart of the computational process carried out by the sensor.
Figure 2: General configuration of a laser triangulation system.
Figure 3: Design of the laser-camera sensor featuring two parallel laser lines, allowing the computation of surface height and its gradient (the representation color is unrelated to the actual laser color, which is the same for both sources).
Figure 4: Simplified block diagram of an industrial finishing line; the flatness sensor lies between the roll leveler and the cutting station.
Figure 5: Close view of the devised sensor (two linear lasers and a camera) installed over an industrial steel roll-leveler processing line.
Figure 6: Signal partition scheme followed to allow Hermite spline interpolation.
Figure 7: A noise-free synthetic surface showing center buckles and wavy-edge defects.
Figure 8: The synthetic surface after adding the effect of vibrations induced by different mechanical sources, such as the shearing station.
Figure 9: 3D representation of the theoretical surface after applying the proposed filtering method.
Figure 10: Grayscale visualization of the synthetic surface interpolated with Hermite splines: the noise-corrupted surface, the filtered surface, and a close-up of both; intensity corresponds to height relative to the surface mean (white positive, dark negative).
Figure 11: Raw sensor data for an S235JR steel coil, with high-frequency transient noisy waves and background noise, and the denoised data obtained with the proposed Hermite interpolation filtering method.
Figure 12: Raw sensor data for an S500MC high-yield steel coil, with periodic transient impulses and background noise, and the corresponding denoising results.
Figure 13: An instance of longitudinal fiber reconstruction by Hermite interpolation from the raw data of Figure 11.
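The Hermite-interpolation idea in the abstract above can be sketched in a few lines: given per-knot heights and height derivatives (the quantity the dual-laser pair is said to supply), a cubic Hermite spline reconstructs the fiber, while high-frequency vibration between knots is simply never sampled. Everything below (knot spacing, the synthetic defect, units) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Synthetic fiber: sparse knots carrying a low-frequency flatness defect.
x_knots = np.linspace(0.0, 2.0, 9)            # positions along the fiber [m]
h = 0.002 * np.sin(np.pi * x_knots)           # height at the knots [m]
dh = 0.002 * np.pi * np.cos(np.pi * x_knots)  # height derivative at the knots

spline = CubicHermiteSpline(x_knots, h, dh)   # C1 fiber reconstruction

x_dense = np.linspace(0.0, 2.0, 400)
h_dense = spline(x_dense)  # smooth surface fiber; knot-to-knot vibration
                           # noise is rejected because it is never sampled
```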
32 pages, 790 KiB  
Article
A Modified Genetic Algorithm with Local Search Strategies and Multi-Crossover Operator for Job Shop Scheduling Problem
by Monique Simplicio Viana, Orides Morandin Junior and Rodrigo Colnago Contreras
Sensors 2020, 20(18), 5440; https://doi.org/10.3390/s20185440 - 22 Sep 2020
Cited by 43 | Viewed by 5590
Abstract
It is not uncommon for today's problems to fall within the scope of the well-known class of NP-hard problems. These problems generally do not have an analytical solution, and it is necessary to use meta-heuristics to solve them. The Job Shop Scheduling Problem (JSSP) is one of these problems, and for its solution, techniques based on the Genetic Algorithm (GA) form the most common approach in the literature. However, GAs are easily compromised by premature convergence and can be trapped in local optima. To address these issues, researchers have been developing new methodologies based on local search schemes and on improvements to the standard mutation and crossover operators. In this work, we propose a new GA within this line of research. In detail, we generalize the concept of a massive local search operator; we improve the use of a local search strategy in the traditional mutation operator; and we develop a new multi-crossover operator. In this way, all operators of the proposed algorithm have local search functionality beyond their original inspirations and characteristics. Our method is evaluated on three different case studies, comprising 58 instances from the literature, which demonstrate the effectiveness of our approach compared to traditional JSSP solution methods. Full article
Show Figures

Figure 1: Examples of chromosomes in the representation by operation order.
Figure 2: Comparison between the steps of the Order-Based Crossover (OX2) and Partially Mapped Crossover (PMX) techniques.
Figure 3: Scheme of the mutation functions.
Figure 4: Flowchart of the proposed Multi-Crossover Local Search Genetic Algorithm.
Figure 5: Box plots of the fitness values over 35 executions of the different configurations of the method.
Figure 6: Box plots of the fitness values over 35 executions of the GA-like methods.
Figure 7: Convergence curves of the GA-like methods over 100 generations.
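To make the JSSP setting above concrete: a minimal decoder for the operation-order chromosome representation (Figure 1), which builds a semi-active schedule and returns its makespan. This is a toy sketch of the classical encoding, not the authors' algorithm.

```python
def makespan(chromosome, jobs):
    """Decode an operation-order chromosome for the JSSP.

    jobs[j] is a list of (machine, duration) operations in fixed order;
    chromosome is a sequence of job ids, each appearing len(jobs[j]) times.
    The k-th occurrence of job j schedules its k-th operation as early as
    machine and job availability allow (a semi-active schedule).
    """
    next_op = [0] * len(jobs)
    job_ready = [0] * len(jobs)
    machine_ready = {}
    for j in chromosome:
        m, d = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready.get(m, 0))
        job_ready[j] = start + d
        machine_ready[m] = start + d
        next_op[j] += 1
    return max(job_ready)

# Toy 2-job, 2-machine instance; each operation is (machine, duration):
jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]
print(makespan([0, 1, 0, 1], jobs))  # -> 7
```

A GA such as the one proposed would evolve permutations like [0, 1, 0, 1] under crossover, mutation, and local search, using this makespan as the fitness.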
19 pages, 862 KiB  
Article
Addressing the Security Gap in IoT: Towards an IoT Cyber Range
by Oliver Nock, Jonathan Starkey and Constantinos Marios Angelopoulos
Sensors 2020, 20(18), 5439; https://doi.org/10.3390/s20185439 - 22 Sep 2020
Cited by 13 | Viewed by 5906
Abstract
The paradigm of the Internet of Things has now reached a maturity level where the pertinent research goal is the successful application of IoT technologies in systems of high technological readiness level. However, while basic aspects of IoT connectivity and networking have been well studied and adequately addressed, this has not been the case for the cyber security aspects of IoT. This is nicely demonstrated by the number of IoT testbeds focusing on networking aspects and the lack of IoT testbeds focusing on security aspects. Towards addressing the existing and growing skills shortage in IoT cyber security, we present an IoT Cyber Range (IoT-CR): an IoT testbed designed for research and training in IoT security. The IoT-CR allows the user to specify and work on customisable IoT networks, both virtual and physical, and supports the concurrent execution of multiple scenarios in a scalable way following a modular architecture. We first provide an overview of existing, state-of-the-art IoT testbeds and cyber security related initiatives. We then present the design and architecture of the IoT Cyber Range, also detailing the corresponding RESTful APIs that help decouple the IoT-CR tiers and hide the underlying complexities. The design is focused around the end user and is based on the four design principles for Cyber Range development discussed in the introduction. Finally, we demonstrate the use of the facility via a red/blue team scenario involving a variant of a man-in-the-middle attack using IoT devices. Future work includes the use of the IoT-CR by cohorts of trainees in order to evaluate the effectiveness of specific scenarios in acquiring IoT-related cyber security knowledge and skills, as well as the integration of the IoT-CR with a pan-European cyber security competence network. Full article
(This article belongs to the Special Issue Sensors Cybersecurity)
Show Figures

Figure 1: Job queueing sequence diagram.
Figure 2: Log retrieval sequence diagram.
Figure 3: Sequence diagram for retrieving available nodes.
Figure 4: System UML block diagram.
Figure 5: System architecture of the IoT Cyber Range.
Figure 6: Log retrieval sequence diagram.
Figure 7: The home screen of the wrapper.
Figure 8: Simulation script editor with example JavaScript code.
Figure 9: Ten-node network depicting the token modification and exchange.
Figure 10: Python wrapper screenshots: (a) signing up; (b) signing in; (c) creation of a scenario with parameters; (d) topology page, with the scenario topology file already present; (e) job creation from the uploaded files; (f) job scheduling, where enabling the job to run changes its state to finished; (g) downloading the logs of each job; (h) signing out, with the username changing to NONE.
Figure 11: Example of communication with the user via email, informing of log availability.
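Since the IoT-CR exposes RESTful APIs, a red/blue-team exercise would be driven by plain HTTP calls. The sketch below mirrors the sign-in / scenario / job / log sequence of the wrapper screenshots at a purely illustrative level; the host, routes, and payload fields are hypothetical, as the paper's actual endpoint names are not reproduced here.

```python
import requests

BASE = "https://iot-cr.example.org/api"  # hypothetical host and routes
session = requests.Session()

# Sign in, create a scenario, upload its topology, queue a job, fetch logs.
session.post(f"{BASE}/login", json={"user": "blue-team", "password": "secret"})
session.post(f"{BASE}/scenarios", json={"name": "mitm-demo", "nodes": 10})
with open("topology.json", "rb") as f:
    session.post(f"{BASE}/scenarios/mitm-demo/topology", files={"topology": f})
job = session.post(f"{BASE}/jobs", json={"scenario": "mitm-demo"}).json()
logs = session.get(f"{BASE}/jobs/{job['id']}/logs").content
```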
15 pages, 10788 KiB  
Article
Moving Auto-Correlation Window Approach for Heart Rate Estimation in Ballistocardiography Extracted by Mattress-Integrated Accelerometers
by Marco Laurino, Danilo Menicucci, Angelo Gemignani, Nicola Carbonaro and Alessandro Tognetti
Sensors 2020, 20(18), 5438; https://doi.org/10.3390/s20185438 - 22 Sep 2020
Cited by 9 | Viewed by 3849
Abstract
Continuous heart monitoring is essential for the early detection and diagnosis of cardiovascular diseases, which are key factors in the evaluation of health status in the general population. Therefore, it will be increasingly important to develop unobtrusive and transparent cardiac monitoring technologies. The possible approaches are the development of wearable technologies or the integration of sensors into daily-life objects. We developed a smart bed for monitoring cardiorespiratory functions during the night or for the continuous monitoring of bedridden patients. The mattress includes three accelerometers for the estimation of the ballistocardiogram (BCG). The BCG signal is generated by the vibrational activity of the body in response to the cardiac ejection of blood. BCG is a promising technique but is usually replaced by the electrocardiogram due to the difficulty of detecting and processing BCG signals. In this work, we describe a new algorithm for extracting heart parameters from the BCG signal, based on a moving auto-correlation sliding window. We tested our method on a group of volunteers with simultaneous co-registration of the electrocardiogram (ECG) in a single-lead configuration. Comparisons with the ECG reference signals indicate that the algorithm performs satisfactorily. The results demonstrate that valuable cardiac information can be obtained from the BCG signal extracted by low-cost sensors integrated in the mattress; continuous, unobtrusive heart monitoring through a smart bed is thus now feasible. Full article
(This article belongs to the Special Issue Emerging Wearable Sensor Technology in Healthcare)
Show Figures

Figure 1: Smart-bed prototype: (a) functional blocks, with the docking station (DS), physiological data collector (PDC), and environmental data collector (EDC); (b) position of the three accelerometers (a1, a2, a3) over the pressure-mapping layer (PML).
Figure 2: Algorithm scheme.
Figure 3: For each subject, the HR extracted from the BCG and the HR from the ECG, with the corresponding mean absolute errors (MAE) and root mean squared errors (RMSE).
Figure 4: For each subject, scatter plots and linear regression lines of HR estimated from BCG and ECG, with regression coefficients (R) and p-values.
Figure 5: Linear regression and Bland-Altman plots of all analysed heartbeats (ECG vs. BCG), pooled over all recordings, reporting the regression equation, squared Pearson r-value, RMSE, number of heartbeats, reproducibility coefficient (RPC), coefficient of variation (CV), the mean ECG-BCG difference, and the 95% limits of agreement.
Figure 6: Comparative table of HR and HRV parameters estimated from ECG and BCG (meanRR, stdRR, stdDRR, rmsDRR, pnn50, LF power at 0.04-0.15 Hz, HF power at 0.15-0.4 Hz, and LF/HF), with mean differences, p-values, and Pearson correlation coefficients.
Figure 7: For each subject, the power spectral densities (PSD, in dB) of the tachograms from BCG and ECG.
Figure A1: As Figure 5, for epochs of 2 s temporal length.
Figure A2: As Figure 5, for epochs of 5 s temporal length.
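The core of a moving auto-correlation estimator is easy to sketch: within each epoch, the lag of the dominant autocorrelation peak inside a physiological range gives the beat period. The code below is a schematic reading of the abstract, with an assumed sampling rate and a synthetic BCG-like signal, not the authors' exact algorithm.

```python
import numpy as np

def hr_from_bcg(window, fs, hr_range=(40, 120)):
    """Estimate heart rate [bpm] from one BCG epoch via autocorrelation.

    The dominant autocorrelation peak inside the physiological lag range
    gives the beat-to-beat period.
    """
    x = window - window.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]  # non-negative lags
    lo = int(fs * 60 / hr_range[1])                    # shortest period
    hi = int(fs * 60 / hr_range[0])                    # longest period
    lag = lo + np.argmax(ac[lo:hi])
    return 60.0 * fs / lag

fs = 100.0                               # assumed sampling rate [Hz]
t = np.arange(0, 4, 1 / fs)
bcg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)  # ~72 bpm
print(round(hr_from_bcg(bcg, fs)))
```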
19 pages, 3113 KiB  
Article
Comprehensive Evaluation on Space Information Network Demonstration Platform Based on Tracking and Data Relay Satellite System
by Feng Liu, Dingyuan Shi, Yunlu Xiao, Tao Zhang and Jie Sun
Sensors 2020, 20(18), 5437; https://doi.org/10.3390/s20185437 - 22 Sep 2020
Cited by 4 | Viewed by 2402
Abstract
Due to the global coverage and real-time access advantages of the Tracking and Data Relay Satellite System (TDRSS), a demonstration platform based on TDRSS can satisfy the new-technology verification and demonstration needs of the space information network (an evolution of the sensor web). However, comprehensive evaluation research on this demonstration platform faces many problems: complicated and diverse technical indicators across areas, coupling redundancy between indicators, difficulty in establishing the number of layers of the indicator system, and evaluation errors caused by subjective scoring. To address these difficulties, this paper gives a method to construct this special index system and improves the consistency of evaluation results with the Analytic Hierarchy Process in Group Decision-Making (AHP-GDM). A comprehensive evaluation index system including five criteria, 11 elements, and more than 30 indicators is constructed according to a three-step strategy of initial set classification, hierarchical optimization, and de-redundancy. For the inconsistent scoring of AHP-GDM, a fast-converging consistency improvement strategy is proposed. Moreover, a method for generating the aggregation coefficients of the comprehensive judgment matrix (the aggregation of the individual judgment matrices) is provided. Numerical experiments show that this strategy effectively improves the consistency of the comprehensive judgment matrix. Finally, taking the evaluation of TDRSS development as an example, the versatility and feasibility of the new evaluation strategy are demonstrated. Full article
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1: The logical diagram of this paper.
Figure 2: Construction solution of the index system.
Figure 3: The processing flow of the system reconstruction.
Figure 4: Comprehensive evaluation index system.
Figure 5: Flowchart of the consistency enhancement strategy.
Figure 6: Comparison of the three consistency improvement methods.
Figure 7: Eigenvalue trend of the convex combination of two matrices.
Figure 8: Effectiveness comparison in a radar chart.
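For readers unfamiliar with the AHP consistency checking that the improvement strategy above targets: the consistency ratio of a pairwise judgment matrix is CR = CI/RI, with CI = (lambda_max - n)/(n - 1). A minimal sketch follows, using Saaty's standard random index (RI) table and an illustrative 3x3 matrix that is not taken from the paper.

```python
import numpy as np

# Saaty's random index values for matrix orders 1..9:
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(A):
    """CR of a pairwise judgment matrix; CR < 0.1 is conventionally OK."""
    n = A.shape[0]
    lmax = np.max(np.linalg.eigvals(A).real)  # principal eigenvalue
    ci = (lmax - n) / (n - 1)
    return ci / RI[n]

# A slightly inconsistent 3x3 judgment matrix (illustrative only):
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 3.0],
              [0.25, 1 / 3, 1.0]])
print(consistency_ratio(A))
```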
19 pages, 5126 KiB  
Article
Low Cost, High Performance, 16-Channel Microwave Measurement System for Tomographic Applications
by Paul Meaney, Alexander Hartov, Timothy Raynolds, Cynthia Davis, Sebastian Richter, Florian Schoenberger, Shireen Geimer and Keith Paulsen
Sensors 2020, 20(18), 5436; https://doi.org/10.3390/s20185436 - 22 Sep 2020
Cited by 9 | Viewed by 3453
Abstract
We have developed a multichannel, software-defined-radio-based transceiver measurement system for use in general microwave tomographic applications. The unit is compact enough to fit conveniently underneath the illumination tank of the current Dartmouth microwave breast imaging system. The system includes 16 channels that can both transmit and receive; it operates from 500 MHz to 2.5 GHz while measuring signals down to −140 dBm. As with any multichannel system, cross-channel leakage is an important specification and must be lower than the noise floor of each receiver. This design exploits the isolation inherent in physically separating the individual receivers for each channel; however, these challenging specifications require more involved signal isolation techniques at both the system design level and the individual, shielded-component level. We describe the isolation design techniques for the critical system elements and demonstrate specification compliance at both the component and system level. Full article
(This article belongs to the Special Issue Microwave Sensing and Imaging)
Show Figures

Figure 1: Schematic diagram of the complete system: (a) transmitting B210 board, (b) transmitting SP16T switch, (c) 16 switch/amplifier modules, (d) eight dedicated receive B210s, (e) switch module for the reference signal, and (f) 1-by-8 power dividers for the reference signal.
Figure 2: The B210 USRP circuit board without and with a commercial cover.
Figure 3: The B210 USRP circuit board mounted inside its custom shielded housing; the cover's central ridge isolates RF fields from the digital portion of the circuitry.
Figure 4: The switch/amplifier module, showing the compartmentalization of the single-pole/single-throw (SPST) switch, single-pole/double-throw (SPDT) switch, and low-noise amplifier (LNA), and the associated cover with raised surfaces.
Figure 5: A test illumination chamber and a schematic of the imaging field of view with 16 monopole antennas and a test object.
Figure 6: The microwave electronic subsystem: the fully assembled system, the grouping of eight shielded B210s and switch/amplifier modules, and the complete system integrated below the imaging tank on the antenna array mounting plate.
Figure 7: The enclosed B210 with labels indicating probe measurement sites.
Figure 8: Switch/amplifier insertion loss (or gain) in transmission mode (Tx to Ant) and receive mode (Ant to Rx), and Tx-Rx leakage in receive mode for the fully shielded housing and for the compartmentalized housing without its cover.
Figure 9: SolidWorks 3D rendering of the switch/amplifier housing with four measurement sites.
Figure 10: Isolation measured at the receivers for transmission from Channel 1 at 7 frequencies, with the remaining channels in receive mode and antenna ports terminated with a 50 Ω matched load; except at 700 MHz, all values are at or below −135 dB.
Figure 11: Measurement data for antenna 1 transmission with varying levels of added leakage: raw magnitude, calibrated magnitude, and calibrated phase, for the homogeneous bath and with the object present.
Figure 12: Reconstructed relative permittivity and conductivity at 1100 MHz for no signal leakage and for leakage of −130, −120, −110, and −100 dB, for the square object of Figure 5 in the 14.2 cm diameter field of view.
Figure 13: Horizontal transects through the 1100 MHz permittivity and conductivity images of Figure 12.
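The leakage requirement in the abstract above reduces to simple level arithmetic: cross-channel leakage must land at or below the −140 dBm noise floor. A sketch with an assumed transmit level (the actual drive levels are in the paper, not here):

```python
tx_power_dbm = 0.0        # assumed transmit level, for illustration only
isolation_db = 140.0      # end-to-end channel-to-channel isolation
noise_floor_dbm = -140.0  # receiver noise floor quoted in the abstract

# Leakage at a victim receiver is the transmit level minus the isolation:
leakage_dbm = tx_power_dbm - isolation_db
print(leakage_dbm, leakage_dbm <= noise_floor_dbm)  # -140.0 True
```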
26 pages, 11062 KiB  
Article
High Density Real-Time Air Quality Derived Services from IoT Networks
by Claudio Badii, Stefano Bilotta, Daniele Cenni, Angelo Difino, Paolo Nesi, Irene Paoli and Michela Paolucci
Sensors 2020, 20(18), 5435; https://doi.org/10.3390/s20185435 - 22 Sep 2020
Cited by 13 | Viewed by 3663
Abstract
In recent years, there has been increasing attention on air-quality-derived services for end users. A dense grid of measurements is needed to implement services such as conditional routing, alerting on data values for personal use, data heatmaps for control-room dashboards, and web and mobile applications for city users. The challenge therefore consists of providing high-density data and services to a large number of users, starting from scattered data and regardless of the number of sensors and their positions. To this aim, this paper focuses on an integrated solution addressing multiple aspects at the same time: creating and optimizing algorithms for data interpolation (producing regular data from scattered data), coping with scalability, and supporting on-demand services that provide air quality data at any point of the city with dense data. To this end, the accuracy of different interpolation algorithms was evaluated by comparing the results with real values. In addition, the trends of the heatmap interpolation errors were exploited to detect device dysfunctions; such anomalies often justify requesting a maintenance action. The proposed solution has been integrated as microservices providing data analytics in a real-time data-flow process based on Node.js Node-RED, called IoT Applications in this paper. The specific case presented here refers to the data and the Snap4City solution for Helsinki. Snap4City, developed as part of the Select4Cities PCP of the European Commission, is presently used in a number of cities and areas in Europe. Full article
(This article belongs to the Section Internet of Things)
Show Figures

Figure 1: Overview of the Snap4City functional architecture.
Figure 2: Workflow for data computation vs. services: green blocks are implemented as IoT Applications, cyan blocks are tools exposing services via Smart City Application Programming Interfaces (APIs), and white blocks are data; scattered IoT sensor data are collected via the Smart City API into the big-data storage (not all needed services are depicted, to keep the figure readable and focused).
Figure 3: Helsinki Jätkäsaari PM10 average hourly trend for weekends and working days.
Figure 4: (a) Dashboard of Helsinki environmental aspects, with a high density of sensors on Jätkäsaari Island; (b) PM10 interpolation heatmap (Inverse Distance Weighting, IDW) for a small area of the island, at 43% transparency; the legend uses a 9-colour map, the heatmap a 150-colour map.
Figure 5: IDW interpolation over the entire Helsinki area (a regular map containing all sensors) versus Akima interpolation (an irregular map in which sensors producing no measure are excluded from the colored area).
Figure 6: The Helsinki in a Snap mobile app: heatmap visualization and subscription to alerts.
Figure 7: Data gathering on the Snap4City broker: static data of each air quality sensor (an IoT Device in the Snap4City platform) and the history view of the real-time PM10 and PM2.5 data at 1 min frequency.
Figure 8: PM10 working-day RMSE and MAPE per time slot (Akima method).
Figure 9: PM10 working-day RMSE and MAPE per time slot (IDW method).
Figure 10: PM10 working-day error box plots per device (Akima method) at 03:00, 10:00, and 16:00.
Figure 11: PM10 working-day error box plots per device (IDW method) at 03:00, 10:00, and 16:00.
Figure 12: PM10 working-day interpolation error trends per hour (mean absolute percentage error) for six personal devices, including the device with a dysfunction, and for five personal devices.
Figure 13: Node-RED workflow for the automation of heatmap production.
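Of the interpolation methods compared above, IDW is the simplest to sketch: each grid point is a distance-weighted average of the station values. The station coordinates and PM10 values below are hypothetical.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting on scattered sensor data.

    The weight of station i at query point q is 1 / d(q, i)**power; a
    query that coincides with a station returns (almost exactly) its value.
    """
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w @ values) / w.sum(axis=1)

# Three hypothetical PM10 stations (x, y in km; values in ug/m^3):
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pm10 = np.array([12.0, 20.0, 16.0])
grid = np.array([[0.5, 0.5], [0.9, 0.1]])
print(idw(stations, pm10, grid))
```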
19 pages, 3733 KiB  
Review
MXenes-Based Bioanalytical Sensors: Design, Characterization, and Applications
by Reem Khan and Silvana Andreescu
Sensors 2020, 20(18), 5434; https://doi.org/10.3390/s20185434 - 22 Sep 2020
Cited by 82 | Viewed by 7575
Abstract
MXenes are recently developed 2D layered nanomaterials that provide unique capabilities for bioanalytical applications, including high metallic conductivity, large surface area, hydrophilicity, high ion transport, a low diffusion barrier, biocompatibility, and ease of surface functionalization. MXenes are composed of transition metal carbides, nitrides, or carbonitrides and have the general formula Mn+1Xn, where M is an early transition metal and X is carbon and/or nitrogen. Due to these unique features, MXenes have attracted significant attention in fields such as clean energy production, electronics, fuel cells, supercapacitors, and catalysis. Their composition and layered structure also make MXenes attractive for biosensing: the high conductivity suits them to the design of electrochemical biosensors, and the multilayered configuration makes them an efficient immobilization matrix that retains the activity of immobilized biomolecules. These properties are applicable to many biosensing systems and applications. This review describes the progress made in the use and application of MXenes in the development of electrochemical and optical biosensors and highlights future needs and opportunities in this field. In particular, opportunities for developing wearable sensors and systems with integrated biomolecule recognition are highlighted. Full article
(This article belongs to the Special Issue Biosensors – Recent Advances and Future Challenges)
Show Figures

Figure 1: Schematic of the exfoliation of MAX phases into MXenes, with SEM micrographs of Ti3AlC2 MAX phases and of Ti3AlC2, Ti2AlC, Ta4AlC3, TiNbAlC, and Ti3AlCN after HF treatment [2].
Figure 2: Summary of the classes of MXene-based biosensing platforms and their applications: immunosensors [62], aptasensors [63], and enzyme sensors [64].
Figure 3: Glucose biosensor fabrication (HF etching of Ti3AlC2, TBAOH delamination, glassy carbon electrode modification with MXene, glucose oxidase (GOx) loading, glutaraldehyde cross-linking, and the detection mechanism), with chronoamperometry data and the calibration plot for the Ti3C2-HF/TBA sensor in 2 mM FcMeOH, pH 7.2 PBS, at 0.15 V [64].
Figure 4: Preparation of Ti3C2Tx nanosheets and of pure Ti3C2Tx, pure graphene, and MG hybrid films for enzyme immobilization [66].
Figure 5: Formation of the MnO2/Mn3O4 composite and MXene/Au NPs, and the fabrication process of the AChE-Chit/MXene/Au NPs/MnO2/Mn3O4/GCE biosensor for the methamidophos assay [74].
Figure 6: Schematic of the electrochemical carcinoembryonic antigen (CEA) detection mechanism [62].
Figure 7: Analytical performance of the developed immunoassay: capacitance responses of the Ti3C2 MXene-based interdigitated immunosensor to target PSA standards, the corresponding calibration plots, and the specificity against non-targets AFP, CEA, and CA 125 [86].
Figure 8: Schematic of the aptasensor fabrication based on PPy@Ti3C2/PMo12 for OPN detection [63].
Figure 9: Schematic of the developed fluorescence resonance energy transfer (FRET)-based aptasensor [94].
Figure 10: Schematic diagram of the wearable sweat sensor [102].
18 pages, 5117 KiB  
Article
The Influence of Camera and Optical System Parameters on the Uncertainty of Object Location Measurement in Vision Systems
by Jacek Skibicki, Anna Golijanek-Jędrzejczyk and Ariel Dzwonkowski
Sensors 2020, 20(18), 5433; https://doi.org/10.3390/s20185433 - 22 Sep 2020
Cited by 10 | Viewed by 3898
Abstract
The article presents the influence of the camera and its optical system on the uncertainty of object-position measurement in vision systems. The aim of the article is to present a methodology for estimating the combined standard uncertainty of measuring the position of an object with a vision camera treated as a measuring device. Identifying the factors that affect the position measurement uncertainty, and determining their shares in the combined standard uncertainty, makes it possible to choose the camera's operating parameters so that the expanded uncertainty is as small as possible under the given measurement conditions. The uncertainty analysis presented in the article assumes that no external factors (e.g., temperature, humidity, or vibrations) exert an influence. Full article
(This article belongs to the Section Optical Sensors)
Show Figures

Figure 1: Scheme of the measurement stand (top view): k, distance between the object plane and the image plane (image sensor); F, distance between the plane of the optical centre of the lens and the image plane; x, horizontal displacement of the object relative to the optical axis of the lens; x′, horizontal location of the image of the object.
Figure 2: The subject of the analysis.
Figure 3: View of the measuring stand.
Figure 4: Influence of the image registration parameters: the stochastic distribution of the measurement result as a function of the brightness of the recorded image, and the uncertainties u_reg(x′) and u_reg(y′) in the horizontal and vertical axes as functions of brightness.
Figure 5: Stochastic distribution of the obtained results as a function of the sensor sensitivity level.
Figure 6: Uncertainty u_sns of the measurement results as a function of actual sensor sensitivity, in the horizontal axis u_sns(x′) and the vertical axis u_sns(y′).
Figure 7: Stochastic deviation of the measurement results as a function of lens focal length.
Figure 8: Standard measurement uncertainty as a function of lens focal length, in the horizontal and vertical axes.
Figure 9: Standard uncertainty of measuring the position of the object image on the camera sensor as a function of the acquisition image brightness and lens focal length, in the horizontal axis x and the vertical axis y.
Figure 10: Measurement of the vertical movements of an HV overhead power line conductor: the principle of measurement.
Figure 11: Measurement results of the vertical movements of an HV overhead power line conductor, for the optimal brightness level and for a level 1.5 EV below optimal.
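The combined standard uncertainty in the abstract above follows the usual GUM propagation. Under a simple similar-triangles reading of the Figure 1 geometry, x = x′(k − F)/F; the sketch below propagates assumed uncertainties through the partial derivatives. All numbers are illustrative, not the paper's.

```python
import numpy as np

# Similar-triangles model of the stand geometry: x = x' * (k - F) / F.
xp, u_xp = 2.0e-3, 5e-6   # image-plane position and its uncertainty [m]
k,  u_k  = 1.5,   1e-3    # object plane to image plane distance [m]
F,  u_F  = 50e-3, 0.2e-3  # optical-centre plane to image plane [m]

x = xp * (k - F) / F

# GUM-style combined standard uncertainty from the sensitivity coefficients:
c_xp = (k - F) / F        # dx/dx'
c_k = xp / F              # dx/dk
c_F = -xp * k / F**2      # dx/dF
u_x = np.sqrt((c_xp * u_xp)**2 + (c_k * u_k)**2 + (c_F * u_F)**2)
print(x, u_x)
```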
26 pages, 3047 KiB  
Article
A Class-Independent Texture-Separation Method Based on a Pixel-Wise Binary Classification
by Lucas de Assis Soares, Klaus Fabian Côco, Patrick Marques Ciarelli and Evandro Ottoni Teatini Salles
Sensors 2020, 20(18), 5432; https://doi.org/10.3390/s20185432 - 22 Sep 2020
Cited by 3 | Viewed by 2990
Abstract
Texture segmentation is a challenging problem in computer vision due to the subjective nature of textures, the variability with which they occur in images, their dependence on scale and illumination variation, and the lack of a precise definition in the literature. This paper proposes a method to segment textures through a binary pixel-wise classification, without the need for a predefined number of texture classes. Using a convolutional neural network with an encoder–decoder architecture, each pixel is classified as being inside an internal texture region or on a border between two different textures. The network is trained using the Prague Texture Segmentation Datagenerator and Benchmark and tested using the same dataset, as well as the Brodatz textures dataset and the Describable Textures Dataset. The method is also evaluated on the separation of regions in images from different applications, namely remote sensing images and H&E-stained tissue images. It is shown that the method performs well on different test sets, can precisely identify borders between texture regions, and does not suffer from over-segmentation. Full article
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1: Network architecture.
Figure 2: Texture images from the Prague Texture Segmentation Datagenerator and Benchmark [70].
Figure 3: Texture images from the Brodatz dataset [82].
Figure 4: Texture images from the Describable Textures Dataset [83].
Figures 5 and 6: Examples of mosaic structures and their associated borders.
Figures 7-10: Example mosaics with the results of a multi-class labeling approach, a HED architecture, and the proposed method.
Figure 11: Precision-recall curves for the four test sets.
Figure 12: F-measure curves for the four test sets.
Figure 13: Pratt Figure of Merit curves for the four test sets.
Figures 14-17: Comparisons on the Prague Texture Dataset Generator and Benchmark [70] over several benchmark texture mosaics: original images, ground truths, edges produced by the networks of [67] and [69], and edges produced by the presented method.
Figure 18: Application to remote sensing images: original image, ground truth, and the results of the proposed method.
Figure 19: An example mosaic with the results of a multi-class labeling approach, a HED architecture, and the proposed method.
Figure 20: Application to lymphoma tissue images: original image, ground truth, and the results of the proposed method.
Figure 21: An example mosaic with the results of a multi-class labeling approach, a HED architecture, and the proposed method.
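A toy PyTorch sketch of the abstract's idea: an encoder-decoder that emits a single-channel sigmoid map, trained with binary cross-entropy so that each pixel is classified as border versus interior. It is far smaller than the paper's network; only the overall shape of the approach is kept.

```python
import torch
import torch.nn as nn

class TinyBorderNet(nn.Module):
    """Schematic encoder-decoder for pixel-wise border/interior labeling."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(           # downsample by 2
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(           # upsample back to input size
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))

net = TinyBorderNet()
border_prob = net(torch.randn(1, 1, 64, 64))    # per-pixel border probability
loss = nn.functional.binary_cross_entropy(border_prob,
                                          torch.zeros(1, 1, 64, 64))
```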
24 pages, 9661 KiB  
Article
Detection of Direction-Of-Arrival in Time Domain Using Compressive Time Delay Estimation with Single and Multiple Measurements
by Youngmin Choo, Yongsung Park and Woojae Seong
Sensors 2020, 20(18), 5431; https://doi.org/10.3390/s20185431 - 22 Sep 2020
Cited by 7 | Viewed by 2946
Abstract
Compressive time delay estimation (TDE) is combined with delay-and-sum beamforming to obtain direction-of-arrival (DOA) estimates in the time domain. Generally, a matched filter that detects the arrivals at each hydrophone is used with beamforming. However, when ocean noise smears the arrivals, ambiguities appear in the beamforming results, degrading the DOA estimation. In this work, compressive sensing (CS) is applied to evaluate the arrivals accurately by suppressing the noise, which enables the correct detection of arrivals. For this purpose, CS is used in two steps. First, candidate time delays for the actual arrivals are calculated in the continuous time domain using grid-free CS. Then, the dominant arrivals constituting the received signal are selected by conventional CS using these time delays in the discrete time domain. Basically, the compressive TDE is used with a single measurement. To further reduce the noise, common arrivals over multiple measurements, obtained using the extended compressive TDE, are exploited. Delay-and-sum beamforming using the refined arrival estimates provides more pronounced DOAs. The proposed scheme is applied to shallow-water acoustic variability experiment 15 (SAVEX15) measurement data to demonstrate its validity. Full article
(This article belongs to the Special Issue Advanced Passive Radar Techniques and Applications)
Show Figures
Figures 1–18: DOA estimation at the vertical line array using the matched filter versus compressive sensing, with single and multiple measurements, in noise-free and −9.8 dB SNR simulations (Figures 1–6); the two-step extended grid-free/on-grid compressive TDE process for CIR extraction (Figure 7); expanded CS over multiple measurements, including the Doppler case (Figures 8 and 10); energy-ratio comparison of MF, repeated CS, and expanded CS versus SNR (Figure 9); SAVEX15 results in the 0.5–2 kHz and 11–31 kHz bands (Figures 11–17); and enlarged beamforming comparisons showing reduced ambiguities for the more sophisticated schemes (Figure 18).
Full article
20 pages, 10606 KiB  
Article
A Novel MEMS Gyroscope In-Self Calibration Approach
by Qifan Zhou, Guizhen Yu, Huazhi Li and Na Zhang
Sensors 2020, 20(18), 5430; https://doi.org/10.3390/s20185430 - 22 Sep 2020
Cited by 8 | Viewed by 4575
Abstract
This paper presents a novel approach for the self-calibration of hand-held, low-cost MEMS (micro-electro-mechanical system) gyroscopes. Unlike traditional calibration schemes, this method does not need the support of external high-precision equipment and can be accomplished by user hand rotation. In this approach, a Kalman [...] Read more.
This paper presents a novel approach for the self-calibration of hand-held, low-cost MEMS (micro-electro-mechanical system) gyroscopes. Unlike traditional calibration schemes, this method does not need the support of external high-precision equipment and can be accomplished by user hand rotation. In this approach, a Kalman filter is designed to perform the calibration procedure and estimate the gyroscope bias error, scale factor error, and non-orthogonality error. The system observability is analyzed, and the dynamic rotation conditions under which the sensor errors become observable are derived. The design principles of an optimal calibration procedure are provided as well. Both simulated and practical experiments are carried out to validate the proposed calibration algorithm. The achieved results demonstrate that the introduced approach provides a promising calibration scheme for low-cost MEMS gyroscopes. Full article
(This article belongs to the Section Physical Sensors)
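As a rough illustration of the filtering idea, the sketch below implements a linear Kalman filter for constant bias and scale-factor error states under the assumed measurement model w_meas = (I + diag(s))·w_true + b + v. This is our simplified stand-in, not the paper's exact filter: the paper's state vector also includes non-orthogonality errors, and its observation model differs.

```python
import numpy as np

class GyroCalibKF:
    """Linear KF estimating constant gyro bias b (3) and scale-factor error s (3),
    assuming the true angular rate w_true is known during the calibration motion."""

    def __init__(self, q=1e-10, r=1e-4):
        self.x = np.zeros(6)                 # state: [bias(3), scale error(3)]
        self.P = np.eye(6) * 1e-2            # initial state uncertainty
        self.Q = np.eye(6) * q               # error states are (nearly) constant
        self.R = np.eye(3) * r               # gyro measurement noise covariance

    def update(self, w_true, w_meas):
        self.P += self.Q                     # predict step: constant states
        H = np.hstack([np.eye(3), np.diag(w_true)])   # measurement Jacobian
        z = w_meas - w_true                  # residual w.r.t. an ideal sensor
        y = z - H @ self.x                   # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x += K @ y
        self.P = (np.eye(6) - K @ H) @ self.P
        return self.x
```

Consistent with the observability analysis the abstract mentions, the bias terms are observable even at rest, whereas each scale-factor error becomes observable only when the device rotates about the corresponding axis (nonzero entries in diag(w_true)).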
Show Figures
Figures 1–8: simulated inertial sensor measurements, the sensor error calibration result, and the P-matrix elements for the simulation (Figures 1–3); collected inertial data, sensor error estimation, and P-matrix elements for the practical test (Figures 4–6); and comparison of the attitude calculation results (Figures 7 and 8).
Full article
12 pages, 5598 KiB  
Letter
A Feature Optimization Approach Based on Inter-Class and Intra-Class Distance for Ship Type Classification
by Chen Li, Ziyuan Liu, Jiawei Ren, Wenchao Wang and Ji Xu
Sensors 2020, 20(18), 5429; https://doi.org/10.3390/s20185429 - 22 Sep 2020
Cited by 30 | Viewed by 3585
Abstract
Deep learning based methods have achieved state-of-the-art results on the task of ship type classification. However, most existing ship type classification algorithms take time–frequency (TF) features as input, and the underlying discriminative information of these features has not been explored thoroughly. This paper proposes [...] Read more.
Deep learning based methods have achieved state-of-the-art results on the task of ship type classification. However, most existing ship type classification algorithms take time–frequency (TF) features as input, and the underlying discriminative information of these features has not been explored thoroughly. This paper proposes a novel feature optimization method for ship type classification, designed to minimize an objective function that increases inter-class and reduces intra-class feature distance. The objective function learns a center for each class and pulls samples from the same class closer to the corresponding center. This ensures that the features capture the underlying discriminative information in the data, particularly for targets that are usually confused by conventional hand-designed features. Results on a dataset from a real environment show that the proposed feature optimization approach outperforms traditional TF features. Full article
(This article belongs to the Section Sensor Networks)
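The following numpy sketch shows one plausible form of such an objective. It is our assumption based on the abstract, not the paper's exact formulation: a learnable center per ship class, a pull term for intra-class compactness, and a hinge-based push term between centers; the margin and weight beta are illustrative.

```python
import numpy as np

def center_loss(features, labels, centers, margin=1.0, beta=0.1):
    """features: (N, d); labels: (N,) ints in [0, C); centers: (C, d)."""
    # intra-class pull: mean squared distance of each sample to its own center
    intra = np.mean(np.sum((features - centers[labels]) ** 2, axis=1))
    # inter-class push: hinge penalty on pairwise center distances below margin
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)
    mask = ~np.eye(len(centers), dtype=bool)           # off-diagonal pairs only
    inter = np.mean(np.maximum(0.0, margin - dist[mask]))
    return intra + beta * inter

def update_centers(features, labels, centers, lr=0.5):
    """SGD-style center update: move each center toward its class mean."""
    new = centers.copy()
    for c in range(len(centers)):
        sel = features[labels == c]
        if len(sel):
            new[c] += lr * (sel.mean(axis=0) - centers[c])
    return new
```

In a joint-training setup such as the one sketched in the paper's figures, this loss would be added to a softmax classification loss, with the centers updated alongside the network weights.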
Show Figures
Figures 1–7: the DNN architecture for ship classification and the proposed feature optimization with its joint-training framework combining the optimization loss and softmax loss (Figures 1–3); hydrophone spectrograms for background noise, motorboats, passenger ferries, dredgers, mussel boats, and ro-ro vessels (Figure 4); retrieval performance versus the weight α, the classification and feature-optimization loss curves over iterations, and visualization of the Fbank versus the optimized feature (Figures 5–7).
Full article
24 pages, 8129 KiB  
Article
Monitoring the Structural Health of Glass Fibre-Reinforced Hybrid Laminates Using Novel Piezoceramic Film
by René Schmidt, Alexander Graf, Ricardo Decker, Michael Heinrich, Verena Kräusel, Lothar Kroll and Wolfram Hardt
Sensors 2020, 20(18), 5428; https://doi.org/10.3390/s20185428 - 22 Sep 2020
Cited by 4 | Viewed by 2873
Abstract
This work investigates a new-generation structural health monitoring (SHM) system for fibre metal laminates (FML) based on an embedded thermoplastic film with compounded piezoceramics, termed piezo-active fibre metal laminate (PFML). The PFML is manufactured using near-series processes and its potential as a [...] Read more.
This work investigates a new-generation structural health monitoring (SHM) system for fibre metal laminates (FML) based on an embedded thermoplastic film with compounded piezoceramics, termed piezo-active fibre metal laminate (PFML). The PFML is manufactured using near-series processes, and its potential as a passive SHM system is investigated. A commercial polyvinylidene fluoride (PVDF) sensor film is used for the comparative evaluation of the sensor signals. Furthermore, thermoset- and thermoplastic-based FML are equipped with the sensor films and evaluated. For this purpose, static and dynamic three-point bending tests are carried out and the data are recorded. The data obtained from the sensors and the testing machine are compared with the type and time of damage by means of intelligent signal processing. Using a smart sensor system, further investigations are planned that will enable the differentiation between various failure modes, e.g., delamination or fibre breakage. Full article
Show Figures
Figures 1–11: specimen preparation and the static/dynamic three-point bending set-up and predefined test sequence (Figures 1–3); the standard deviation of the SED from the preliminary study compared with the normalized sensor signal, and peaks forming a characteristic trend (Figures 4 and 5); force–displacement curves and the failure of the thermoplastic and thermoset laminates after the static test (Figures 6–8); maximum force/cycle comparisons from the dynamic test and DMG index analyses for PFML-PZT-EP and PFML-PVDF-EP in the static and dynamic scenarios (Figures 9–11).
Full article
18 pages, 9660 KiB  
Article
Convolutional Neural Network Architecture for Recovering Watermark Synchronization
by Wook-Hyung Kim, Jihyeon Kang, Seung-Min Mun and Jong-Uk Hou
Sensors 2020, 20(18), 5427; https://doi.org/10.3390/s20185427 - 22 Sep 2020
Cited by 8 | Viewed by 2795
Abstract
In this paper, we propose a convolutional neural network-based template architecture that compensates for the disadvantages of existing watermarking techniques that are vulnerable to geometric distortion. The proposed template consists of a template generation network, a template extraction network, and a template matching [...] Read more.
In this paper, we propose a convolutional neural network-based template architecture that compensates for the disadvantages of existing watermarking techniques that are vulnerable to geometric distortion. The proposed template consists of a template generation network, a template extraction network, and a template matching network. The template generation network generates a template in the form of noise and the template is inserted into certain pre-defined spatial locations of the image. The extraction network detects spatial locations where the template is inserted in the image. Finally, the template matching network estimates the parameters of the geometric distortion by comparing the shape of spatial locations where the template was inserted with the locations where the template was detected. It is possible to recover an image in its original geometrical form using the estimated parameters, and as a result, watermarks applied using existing watermarking techniques that are vulnerable to geometric distortion can be decoded normally. Full article
(This article belongs to the Section Sensing and Imaging)
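As a hedged illustration of the extraction stage, the PyTorch sketch below trains a small fully convolutional network to predict, per pixel, whether the template is present. The layer sizes, loss choice, and toy data are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TemplateExtractor(nn.Module):
    """Toy fully convolutional extractor: image in, template-presence logits out."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),        # per-pixel logits
        )

    def forward(self, x):
        return self.net(x)

# one training step on toy data: stego images with a known insertion mask
model = TemplateExtractor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
img = torch.rand(8, 1, 128, 128)                     # placeholder stego images
mask = (torch.rand(8, 1, 128, 128) > 0.9).float()    # placeholder ground truth
loss = nn.functional.binary_cross_entropy_with_logits(model(img), mask)
opt.zero_grad(); loss.backward(); opt.step()
```

In the full pipeline, the predicted mask would then be compared with the known insertion pattern by the matching network to estimate the rotation, scaling, and translation parameters before watermark decoding.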
Show Figures
Figures 1–10: the illegal-distribution scenario for real-time web-comics and the insertion/decoding overview (Figures 1 and 2); template insertion/extraction and the generation, extraction, and matching network architectures (Figures 3–5); curvelet frequency-spectrum coverage and template/watermark insertion for images of various sizes (Figures 6 and 7); original versus template- and watermark-embedded images (Figure 8); and robustness (BER) against rotation, scaling, and translation, with and without DIBR rendering (Figures 9 and 10).
Full article
38 pages, 130648 KiB  
Article
Toward Mass Video Data Analysis: Interactive and Immersive 4D Scene Reconstruction
by Matthias Kraus, Thomas Pollok, Matthias Miller, Timon Kilian, Tobias Moritz, Daniel Schweitzer, Jürgen Beyerer, Daniel Keim, Chengchao Qu and Wolfgang Jentner
Sensors 2020, 20(18), 5426; https://doi.org/10.3390/s20185426 - 22 Sep 2020
Cited by 6 | Viewed by 4157
Abstract
The technical progress of the last decades has made photo and video recording devices omnipresent. This change has a significant impact, among others, on police work. It is no longer unusual that a myriad of digital data accumulates after a criminal act, which must [...] Read more.
The technical progress of the last decades has made photo and video recording devices omnipresent. This change has a significant impact, among others, on police work. It is no longer unusual that a myriad of digital data accumulates after a criminal act, which must be reviewed by criminal investigators to collect evidence or solve the crime. This paper presents the VICTORIA Interactive 4D Scene Reconstruction and Analysis Framework (“ISRA-4D” 1.0), an approach for the visual consolidation of heterogeneous video and image data in a 3D reconstruction of the corresponding environment. First, by reconstructing the environment in which the materials were created, a shared spatial context for all available materials is established. Second, all footage is spatially and temporally registered within this 3D reconstruction. Third, a visualization of the resulting 4D reconstruction (3D scene + time) is provided, which can be analyzed interactively. Additional information on video and image content is also extracted and displayed, and can be analyzed with supporting visualizations. The presented approach facilitates the process of filtering, annotating, analyzing, and getting an overview of large amounts of multimedia material. The framework is evaluated in four case studies that demonstrate its broad applicability. Furthermore, the framework allows users to immerse themselves in the analysis by entering the scenario in virtual reality. This feature is qualitatively evaluated through interviews with criminal investigators, which outline potential benefits such as improved spatial understanding and the initiation of new fields of application. Full article
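One concrete piece of the spatial registration step is mapping a detection from pixel coordinates to world coordinates. The sketch below shows the textbook version of this raycasting under the simplifying assumption of a flat ground plane; the framework itself intersects the reconstructed scene mesh instead. K, R, and t denote the usual camera intrinsics and world-to-camera extrinsics.

```python
import numpy as np

def pixel_to_world(u, v, K, R, t, ground_z=0.0):
    """Back-project pixel (u, v) and intersect the ray with the plane z = ground_z.
    K: 3x3 intrinsics; R, t: world-to-camera extrinsics; returns a world point."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray, camera frame
    ray_world = R.T @ ray_cam                            # rotate ray into world frame
    cam_center = -R.T @ t                                # camera position in world
    s = (ground_z - cam_center[2]) / ray_world[2]        # ray-plane intersection
    return cam_center + s * ray_world
```

Emitting the ray through the lower edge of a detection bounding box, as the framework does, makes the plane (or mesh) intersection correspond to the point where the person or object touches the ground.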
Show Figures
Figures 1–32: the processing pipeline, geo-registration into satellite imagery, and monocular depth/point-cloud reconstruction of dynamic content using Monodepth2, PIFuHD, OpenPose, and MaskRCNN (Figures 1–9); raycasting of detections from pixel to world coordinates (Figure 10); the analysis application and its GUI, including multi-camera views, minimap, 3D navigation, panoramas, static and animated annotations, camera frustum display options, detection display modes, trajectories, face anonymization, dynamic point clouds, bottom-panel charts, heatmap projections, and a distance-measuring tool (Figures 11–27); the VR view, controller menus, and collaborative multi-monitor setup (Figures 28–30); and case studies of real-time airport surveillance and police operation planning from drone recordings (Figures 31 and 32).
Full article
12 pages, 1275 KiB  
Letter
Vibration Analysis of Post-Buckled Thin Film on Compliant Substrates
by Xuanqing Fan, Yi Wang, Yuhang Li and Haoran Fu
Sensors 2020, 20(18), 5425; https://doi.org/10.3390/s20185425 - 22 Sep 2020
Cited by 5 | Viewed by 3675
Abstract
Buckling of thin films on compliant substrates is ubiquitous and essential in stretchable electronics. The dynamic behavior of such film–substrate systems is unavoidable when stretchable electronics are in real applications. In this paper, an analytical model is established to investigate the [...] Read more.
Buckling of thin films on compliant substrates is ubiquitous and essential in stretchable electronics. The dynamic behavior of such film–substrate systems is unavoidable when stretchable electronics are in real applications. In this paper, an analytical model is established to investigate the vibration of post-buckled thin films on a compliant substrate by treating the substrate as an elastic foundation. The analytical predictions of the natural frequencies and vibration modes of the system are systematically investigated. The results may serve as guidance for the dynamic design of thin films on compliant substrates to avoid resonance in noisy environments. Full article
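For orientation, a standard starting point for such a model (our illustrative sketch, not the paper's full post-buckled analysis) treats the film as an Euler–Bernoulli beam on a Winkler foundation of stiffness k, with axial force P and mass per unit length ρA:

```latex
% Illustrative Winkler-foundation model: bending stiffness EI, axial force P,
% foundation stiffness k, mass per unit length \rho A, deflection w(x, t).
EI\,\frac{\partial^{4} w}{\partial x^{4}}
  + P\,\frac{\partial^{2} w}{\partial x^{2}}
  + k\,w
  + \rho A\,\frac{\partial^{2} w}{\partial t^{2}} = 0,
\qquad w(x,t) = W(x)\,e^{i\omega t}.
```

For a simply supported film of length L with mode shapes W_n(x) = sin(nπx/L), this gives ω_n² = [EI(nπ/L)⁴ − P(nπ/L)² + k]/(ρA), which already shows the trends the paper studies: the foundation stiffness k raises the natural frequencies, while the compressive force P lowers them.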
Show Figures
Figures 1–6: the film-on-substrate schematic and the first-order buckling mode on an elastic foundation (Figure 1); first-order buckling-mode deflections for different substrate stiffnesses at P = 10 × π² (Figure 2); first-order critical buckling force versus substrate stiffness, theory against simulation, at P = 10⁶ (Figure 3); first- and second-order vibration modes and natural frequencies in the first two buckling modes (Figures 4 and 5); and the buckling and vibration modes for substrate stiffness k = 3.38 × 10⁶ at P = 5000 (Figure 6).
Full article
12 pages, 3252 KiB  
Letter
Field Demonstration of a Distributed Microsensor Network for Chemical Detection
by Jeffrey S. Erickson, Brandy J. Johnson and Anthony P. Malanoski
Sensors 2020, 20(18), 5424; https://doi.org/10.3390/s20185424 - 22 Sep 2020
Cited by 1 | Viewed by 2425
Abstract
We have developed the ABEAM-15, a custom-built multiplexed reflectance device for the detection of vapor phase and aerosolized chemical plumes. The instrument incorporates fifteen individual sensing elements, has wireless communications, offers support for a battery pack, and is capable of both live and [...] Read more.
We have developed the ABEAM-15, a custom-built multiplexed reflectance device for the detection of vapor-phase and aerosolized chemical plumes. The instrument incorporates fifteen individual sensing elements, has wireless communications, offers support for a battery pack, and is capable of both live and fully autonomous operation. Two housing options have been fabricated: a compact open housing for indoor use and a larger weather-sealed housing for outdoor use. Previously developed six-plex analysis algorithms are extended to a 15-plex format and implemented on a laptop computer. We report the results of recent outdoor field trials with this instrument in Denver, CO in a stadium security scenario. Through software, the wireless modules on each instrument were configured to form a six-instrument distributed microsensor network with a star topology, live reporting, and real-time data analysis. The network was tested with aerosols of methyl salicylate. Full article
(This article belongs to the Special Issue Distributed and Pervasive Sensing)
Show Figures
Graphical abstract; Figures 1–7: the ABEAM-6 and ABEAM-15 prototype sensors (Figures 1 and 2); the graphical user interface during event detection (Figure 3); the distribution of devices during the outdoor experiments (Figure 4); the indicator coupon layout with four indicators and negative controls (Figure 5); normalized RGB data from the sensor at position F across three exposure events (Figure 6); and indicator spot maps for ethanol and methyl salicylate exposures (Figure 7).
Full article
23 pages, 5616 KiB  
Article
MODIS Sensor Capability to Burned Area Mapping—Assessment of Performance and Improvements Provided by the Latest Standard Products in Boreal Regions
by José A. Moreno-Ruiz, José R. García-Lázaro, Manuel Arbelo and Manuel Cantón-Garbín
Sensors 2020, 20(18), 5423; https://doi.org/10.3390/s20185423 - 22 Sep 2020
Cited by 10 | Viewed by 3114
Abstract
This paper presents an accuracy assessment of the main global scale Burned Area (BA) products, derived from daily images of the Moderate-Resolution Imaging Spectroradiometer (MODIS) Fire_CCI 5.1 and MCD64A1 C6, as well as the previous versions of both products (Fire_CCI 4.1 and MCD45A1 [...] Read more.
This paper presents an accuracy assessment of the main global scale Burned Area (BA) products, derived from daily images of the Moderate-Resolution Imaging Spectroradiometer (MODIS) Fire_CCI 5.1 and MCD64A1 C6, as well as the previous versions of both products (Fire_CCI 4.1 and MCD45A1 C5). The exercise was conducted on the boreal region of Alaska during the period 2000–2017. All the BA polygons registered by the Alaska Fire Service were used as reference data. Both new versions doubled the annual BA estimate compared to the previous versions (66% for Fire_CCI 5.1 versus 35% for v4.1, and 63% for MCD64A1 C6 versus 28% for C5), reducing the omission error (OE) by almost one half (39% versus 67% for Fire_CCI and 48% versus 74% for MCD) and slightly increasing the commission error (CE) (7.5% versus 7% for Fire_CCI and 18% versus 7% for MCD). The Fire_CCI 5.1 product (CE = 7.5%, OE = 39%) presented the best results in terms of positional accuracy with respect to MCD64A1 C6 (CE = 18%, OE = 48%). These results suggest that Fire_CCI 5.1 could be suitable for those users who employ BA standard products in geoinformatics analysis techniques for wildfire management, especially in Boreal regions. The Pareto boundary analysis, performed on an annual basis, showed that there is still a potential theoretical capacity to improve the MODIS sensor-based BA algorithms. Full article
(This article belongs to the Special Issue Remote Sensing and Geoinformatics in Wildfire Management)
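For reference, the commission and omission errors reported throughout this assessment follow the usual confusion-matrix definitions. A minimal sketch (array names are illustrative):

```python
import numpy as np

def ba_accuracy(product, reference):
    """product, reference: boolean arrays over pixels (True = burned)."""
    tp = np.sum(product & reference)       # burned in both product and reference
    fp = np.sum(product & ~reference)      # mapped burned, actually unburned
    fn = np.sum(~product & reference)      # actual burned missed by the product
    ce = fp / (tp + fp)                    # commission error
    oe = fn / (tp + fn)                    # omission error
    return ce, oe
```

Computed per year against the Alaska Fire Service polygons, these are the quantities plotted against each other and against the Pareto boundaries in the figures below.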
Show Figures
Figures 1–12: the Alaskan boreal study region and the temporal distribution of burned area and fire-size categories, 2000–2017 (Figures 1–3); annual burned-area maps for 2009 and the accuracy-assessment flowchart (Figures 4 and 5); the Pareto boundary for the binary burned/non-burned classification (Figure 6); and scatterplots of annual burned-area estimates, annual time series, commission/omission errors, Pareto boundaries at 250 and 500 m for selected years, and total error versus the Pareto-boundary area for the four products (Figures 7–12).
Full article
15 pages, 3270 KiB  
Letter
Timestamp Estimation in P802.15.4z Amendment
by Ioan Domuta, Tudor Petru Palade, Emanuel Puschita and Andra Pastrav
Sensors 2020, 20(18), 5422; https://doi.org/10.3390/s20185422 - 22 Sep 2020
Cited by 7 | Viewed by 3636
Abstract
Because ranging in the 802.15.4™-2015 standard is known to be prone to external attacks, the enhanced impulse radio (EiR), a new amendment still under development, advances the secure ranging protocol by encrypting the physical layer (PHY) timestamp sequence with the [...] Read more.
Because ranging in the 802.15.4™-2015 standard is known to be prone to external attacks, the enhanced impulse radio (EiR), a new amendment still under development, advances the secure ranging protocol by encrypting the physical layer (PHY) timestamp sequence with the AES-128 algorithm. This new amendment brings many changes and enhancements which affect the impulse-radio ultra-wideband (IR-UWB) ranging procedures. Timestamp detection is the key factor in the accuracy of range estimation and, inherently, in localization precision. This paper analyses the parts of the PHY that contribute most to timestamp estimation precision, particularly the UWB pulse, channel sounding, timestamp estimation using the ciphered sequence, and frequency-selective fading. Unlike EiR, where the UWB pulse is defined in the time domain, in this article the UWB pulse is synthesized from the power spectral density mask, and it is shown that the use of the entire allocated spectrum results in a decrease in risetime, an increase in pulse amplitude, and an attenuation of the lateral lobes. The paper proposes a random spreading of the scrambled timestamp sequence (STS), which improves timestamp estimation by attenuating the lateral lobes of the correlation. Timestamp estimation in noisy channels with non-line-of-sight and multipath propagation is achieved by cross-correlating the received STS with a locally generated replica of the STS. Propagation in a UWB channel with frequency-selective fading results in small errors in the timestamp detection. Full article
(This article belongs to the Special Issue Antennas and Propagation)
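The core of the timestamp estimation described above is a cross-correlation of the received STS with a locally generated replica. A minimal numpy sketch of this idea follows; the threshold value and the first-path detection logic are our assumptions, chosen to illustrate robustness under multipath.

```python
import numpy as np

def estimate_timestamp(rx, sts_replica, fs, threshold=0.5):
    """Estimate the arrival time of the STS within the received samples rx.
    fs is the sample rate; returns the timestamp in seconds."""
    corr = np.correlate(rx, sts_replica, mode="full")
    corr = np.abs(corr[len(sts_replica) - 1:])        # keep non-negative lags
    peak = corr.max()
    # first-path detection: the earliest lag exceeding a fraction of the peak,
    # more robust than the global maximum when later multipath echoes dominate
    first = np.argmax(corr >= threshold * peak)
    return first / fs
```

With a well-designed (e.g., randomly spread) STS, the correlation side lobes stay low, so the first threshold crossing reliably marks the direct path rather than a side lobe.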
Show Figures
Figures 1–10: UWB pulse synthesis from partial and complete spectral masks (Figures 1 and 2); the PHY frame format in the 802.15.4z amendment (Figure 3); timestamp estimation via cross-correlation of STS sequences (Figure 4); CIR and timestamp estimation in channels with multipath, NLOS, and noise (Figures 5–7); the PSD mask, channel response, and timestamp estimation under selective fading (Figures 8 and 9); and spectral leakage effects on CIR and timestamp estimation (Figure 10).
Full article
27 pages, 5971 KiB  
Article
The Millimeter-Wave Radar SLAM Assisted by the RCS Feature of the Target and IMU
by Yang Li, Yutong Liu, Yanping Wang, Yun Lin and Wenjie Shen
Sensors 2020, 20(18), 5421; https://doi.org/10.3390/s20185421 - 22 Sep 2020
Cited by 29 | Viewed by 8138
Abstract
Compared with the commonly used lidar and visual sensors, the millimeter-wave radar has all-day and all-weather performance advantages and more stable performance in the face of different scenarios. However, using the millimeter-wave radar as the Simultaneous Localization and Mapping (SLAM) sensor is also [...] Read more.
Compared with the commonly used lidar and visual sensors, the millimeter-wave radar has all-day and all-weather performance advantages and more stable performance across different scenarios. However, using the millimeter-wave radar as the Simultaneous Localization and Mapping (SLAM) sensor is also associated with other problems, such as small data volume, more outliers, and low precision, which reduce the accuracy of SLAM localization and mapping. This paper proposes a millimeter-wave radar SLAM assisted by the Radar Cross Section (RCS) feature of the target and an Inertial Measurement Unit (IMU). Using the IMU to combine consecutive radar scanning point clouds into a “multi-scan” solves the problem of small data volume. The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is used to filter outliers from the radar data. In the clustering, the RCS feature of the target is considered, and the Mahalanobis distance is used to measure the similarity of the radar data. At the same time, to alleviate the lower SLAM positioning accuracy caused by the low precision of millimeter-wave radar data, an improved Correlative Scan Matching (CSM) method is proposed, which matches the radar point cloud with a local submap of the global grid map. It is a “scan-to-map” point cloud matching method that achieves tight coupling of localization and mapping. Three groups of real-world data are collected to verify the proposed method both in its components and as a whole. The comparison of the experimental results shows that the proposed millimeter-wave radar SLAM assisted by the RCS feature of the target and IMU has better accuracy and robustness across different scenarios. Full article
(This article belongs to the Special Issue On-Board and Remote Sensors in Intelligent Vehicles)
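A minimal sketch of the outlier-filtering step follows. The parameter values and the three-feature layout [x, y, RCS] are illustrative assumptions: pairwise Mahalanobis distances over position-plus-RCS features are fed to DBSCAN as a precomputed metric, so that the RCS feature is weighted consistently with position.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def filter_outliers(points, eps=1.5, min_pts=4):
    """points: (N, 3) array of [x, y, rcs]; returns a boolean inlier mask."""
    cov_inv = np.linalg.inv(np.cov(points.T))           # feature covariance
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.einsum("ijk,kl,ijl->ij", diff, cov_inv, diff)
    D = np.sqrt(np.maximum(d2, 0.0))                    # pairwise Mahalanobis
    labels = DBSCAN(eps=eps, min_samples=min_pts,
                    metric="precomputed").fit_predict(D)
    return labels != -1                                 # -1 marks DBSCAN noise
```

Using the Mahalanobis distance here means a point with an anomalous RCS can be rejected even when it sits close to a cluster in x–y, which is exactly the benefit the RCS feature brings to the clustering.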
Show Figures
Figures 1–16: the system composition and the radar position-error distribution with the selected ROI (Figures 1 and 2); single-frame versus 20-frame multi-scan point clouds (Figure 3); DBSCAN concepts and K-distance plots for parameter selection (Figures 4 and 5); CSM matching ambiguity and its resolution using RCS and occupancy information, with the two rasterized lookup tables (Figures 6–9); the three experimental scenes and outlier filtering with and without the RCS feature (Figures 10–12); maps and trajectories generated by the three improved CSM variants based on RCS, echo power, and existence probability (Figure 13); trajectory and map (OGM versus R-OGM) comparisons for the three scenes (Figures 14 and 15); and the point-cloud data volume at each filtering stage (Figure 16).
Full article
8 pages, 1072 KiB  
Letter
Three-Dimensional Simulation of Particle-Induced Mode Splitting in Large Toroidal Microresonators
by Lei Chen, Cheng Li, Yumin Liu, Judith Su and Euan McLeod
Sensors 2020, 20(18), 5420; https://doi.org/10.3390/s20185420 - 22 Sep 2020
Cited by 7 | Viewed by 3073
Abstract
Whispering gallery mode resonators such as silica microtoroids can be used as sensitive biochemical sensors. One sensing modality is mode-splitting, where the binding of individual targets to the resonator breaks the degeneracy between clockwise and counter-clockwise resonant modes. Compared to other sensing modalities, [...] Read more.
Whispering gallery mode resonators such as silica microtoroids can be used as sensitive biochemical sensors. One sensing modality is mode-splitting, where the binding of individual targets to the resonator breaks the degeneracy between clockwise and counter-clockwise resonant modes. Compared to other sensing modalities, mode-splitting is attractive because the signal shift is theoretically insensitive to the polar coordinate where the target binds. However, this theory relies on several assumptions, and previous experimental and numerical results have shown some discrepancies with analytical theory. More accurate numerical modeling techniques could help to elucidate the underlying physics, but efficient 3D electromagnetic finite-element method simulations of large microtoroids (diameter ~90 µm) and their resonance features have previously been intractable. In addition, applications of mode-splitting often involve bacteria or viruses, which are too large to be accurately described by the existing analytical dipole approximation theory. A numerical simulation approach could accurately explain mode splitting induced by these larger particles. Here, we simulate mode-splitting in a large microtoroid using a beam envelope method with periodic boundary conditions in a wedge-shaped domain. We show that particle sizing is accurate to within 11% for radii a < λ/7, where the dipole approximation is valid. Polarizability calculations need only be based on the background medium and need not consider the microtoroid material. This modeling approach can be applied to other sizes and shapes of microresonators in the future. Full article
(This article belongs to the Special Issue Optical Micro-Resonators for Sensing)
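Under the dipole approximation discussed above, a particle's size follows directly from the measured splitting 2g and linewidth broadening Γ via the ratio quoted in the paper's figure captions, Γ/(2g) = −α ω_c³ ε_bg^{3/2}/(6πc³), combined with the Clausius–Mossotti polarizability. A minimal sketch (sign convention assumed: 2g carries the sign of the frequency shift, negative for a redshift):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def particle_radius(two_g, gamma, w_c, eps_bg, eps_p):
    """two_g: mode splitting (rad/s, signed); gamma: linewidth broadening (rad/s);
    w_c: resonance frequency (rad/s); eps_bg, eps_p: relative permittivities of
    the background medium and the particle. Returns the radius in meters."""
    # invert Gamma / (2g) = -alpha * w_c**3 * eps_bg**1.5 / (6 * pi * C**3)
    alpha = -gamma / two_g * 6 * np.pi * C**3 / (w_c**3 * eps_bg**1.5)
    # Clausius-Mossotti: alpha = 4*pi*a^3 * (eps_p - eps_bg) / (eps_p + 2*eps_bg)
    a3 = alpha * (eps_p + 2 * eps_bg) / (4 * np.pi * (eps_p - eps_bg))
    return a3 ** (1.0 / 3.0)
```

Note that only the background permittivity enters the Γ/(2g) ratio, consistent with the paper's finding that polarizability calculations need not consider the microtoroid material.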
Show Figures

Figure 1

Figure 1
<p>Electric field norm distributions of the traveling transverse-electric (TE) counter-clockwise (CCW) mode inside a bare microtoroid simulated using (<b>a</b>) 2D axisymmetric method and (<b>b</b>) 3D eigenfrequency. Electric field norm distributions of the (<b>c</b>) symmetric (SM) mode and (<b>d</b>) antisymmetric (ASM) mode were simulated using a 3D eigenfrequency model. The perturbative polystyrene nanosphere has a radius of 50 nm and is positioned with a 10 nm radial gap between it and the microtoroid equator. (<b>e</b>) Theoretically simulated mode splitting transmission spectrum. The SM mode experiences a frequency redshift of <math display="inline"><semantics> <mrow> <mn>2</mn> <mi>g</mi> </mrow> </semantics></math> and a linewidth broadening <math display="inline"><semantics> <mo>Γ</mo> </semantics></math>, which is quantified by a full width at half maximum linewidth in Hz. The color bar for all electric field norm distributions is given in (<b>b</b>).</p>
Full article ">Figure 2
<p>Three-dimensional eigenfrequency simulation results of (<b>a</b>) the splitting frequency |<math display="inline"><semantics> <mrow> <mn>2</mn> <mi>g</mi> </mrow> </semantics></math>| and (<b>b</b>) linewidth broadening <math display="inline"><semantics> <mo>Γ</mo> </semantics></math> versus radius <math display="inline"><semantics> <mrow> <msub> <mi>a</mi> <mn>0</mn> </msub> </mrow> </semantics></math> in terms of two different background media: air with <math display="inline"><semantics> <mrow> <msqrt> <mrow> <msub> <mi>ε</mi> <mrow> <mi>b</mi> <mi>g</mi> </mrow> </msub> </mrow> </msqrt> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and water with <math display="inline"><semantics> <mrow> <msqrt> <mrow> <msub> <mi>ε</mi> <mrow> <mi>b</mi> <mi>g</mi> </mrow> </msub> </mrow> </msqrt> <mo>=</mo> <mn>1.33</mn> </mrow> </semantics></math>. (<b>c</b>) Nanosphere polarizability versus radius <math display="inline"><semantics> <mrow> <msub> <mi>a</mi> <mn>0</mn> </msub> </mrow> </semantics></math>. Solid lines denote the analytical calculation using Equation (1) and stars denote numerical results derived from <math display="inline"><semantics> <mrow> <mo>Γ</mo> <mo>/</mo> <mn>2</mn> <mi>g</mi> <mo>=</mo> <mo>−</mo> <mi>α</mi> <msub> <mi>ω</mi> <mi>c</mi> </msub> <msup> <mrow/> <mn>3</mn> </msup> <msup> <mrow> <msqrt> <mrow> <msub> <mi>ε</mi> <mrow> <mi>b</mi> <mi>g</mi> </mrow> </msub> </mrow> </msqrt> </mrow> <mn>3</mn> </msup> <mo>/</mo> <mrow> <mo>(</mo> <mrow> <mn>6</mn> <mi>π</mi> <msup> <mi>c</mi> <mn>3</mn> </msup> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>. (<b>d</b>,<b>e</b>) Particle radius <math display="inline"><semantics> <mrow> <msub> <mi>a</mi> <mrow> <mi>FEM</mi> </mrow> </msub> </mrow> </semantics></math> derived from Equation (4). Solid lines indicate the true radius <math display="inline"><semantics> <mrow> <msub> <mi>a</mi> <mn>0</mn> </msub> </mrow> </semantics></math>. (<b>f</b>) Percent error of the sizing results calculated by <math display="inline"><semantics> <mrow> <mn>100</mn> <mo>×</mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>a</mi> <mn>0</mn> </msub> <mo>−</mo> <msub> <mi>a</mi> <mrow> <mi>FEM</mi> </mrow> </msub> </mrow> <mo>)</mo> </mrow> <mo>/</mo> <msub> <mi>a</mi> <mn>0</mn> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>(<b>a</b>) Q-factors of the SM mode versus the radius <math display="inline"><semantics> <mrow> <msub> <mi>a</mi> <mn>0</mn> </msub> </mrow> </semantics></math>. (<b>b</b>) Diagram of the five microtoroid-particle binding cases where the particle lands with five different polar angles moving away from the energy maximum of the whispering-gallery mode (WGM). The electric field units are arbitrary.</p>
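As an aside on panel (a): the listing does not state how the Q-factors were extracted, but for a complex eigenfrequency solver the standard relation is Q = Re(f)/(2 Im(f)). A minimal sketch, with a purely hypothetical eigenfrequency:

```python
import numpy as np

def quality_factor(f_complex):
    """Q from a complex eigenfrequency f = f' + i*f'' returned by an
    eigenfrequency solver: Q = f' / (2 * f'')."""
    return np.real(f_complex) / (2.0 * np.imag(f_complex))

# Hypothetical complex eigenfrequency in Hz, not a value from the paper:
f = 3.84e14 + 1j * 1.9e6
print(f"Q ~ {quality_factor(f):.2e}")   # ~1e8, typical of silica microtoroids
```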
Full article ">
19 pages, 4450 KiB  
Article
Combining Laser-Induced Breakdown Spectroscopy (LIBS) and Visible Near-Infrared Spectroscopy (Vis-NIRS) for Soil Phosphorus Determination
by Sara Sánchez-Esteva, Maria Knadel, Sergey Kucheryavskiy, Lis W. de Jonge, Gitte H. Rubæk, Cecilie Hermansen and Goswin Heckrath
Sensors 2020, 20(18), 5419; https://doi.org/10.3390/s20185419 - 21 Sep 2020
Cited by 25 | Viewed by 4522
Abstract
Conventional wet chemical methods for the determination of soil phosphorus (P) pools, relevant for environmental and agronomic purposes, are labor-intensive. Therefore, alternative techniques are needed; a combination of spectroscopic techniques, in this case laser-induced breakdown spectroscopy (LIBS) and visible near-infrared spectroscopy (vis-NIRS), could be relevant. We aimed to explore LIBS, vis-NIRS and their combination for soil P estimation. We analyzed 147 Danish agricultural soils with LIBS and vis-NIRS. As reference measurements, we analyzed water-extractable P (Pwater), Olsen P (Polsen), oxalate-extractable P (Pox) and total P (TP) by conventional wet chemical protocols, as proxies for leachable, plant-available, adsorbed inorganic, and total P in soil, respectively. Partial least squares regression (PLSR) models combined with interval partial least squares (iPLS) and competitive adaptive reweighted sampling (CARS) variable selection methods were tested, and the wavelengths relevant for soil P determination were identified. LIBS gave better results than vis-NIRS for all P models except Pwater, for which results were comparable. Model performance for both the LIBS and vis-NIRS techniques, as well as for the combined LIBS-vis-NIRS approach, improved significantly when variable selection was applied. CARS performed better than iPLS in almost all cases. Combined LIBS and vis-NIRS models with variable selection showed the best results for all four P pools, except for Pox, where the results were comparable to those of the LIBS model with CARS. Merging LIBS and vis-NIRS with variable selection thus shows potential for improving soil P determination, but larger, independent validation datasets should be tested in future studies. Full article
(This article belongs to the Section Biosensors)
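For readers who want the modeling pipeline in outline, the sketch below shows a minimal PLSR workflow with a toy interval-based variable selection in the spirit of iPLS. It is illustrative only: the spectra and response are synthetic stand-ins for the 147 soil spectra, interval_pls_rmsecv is a hypothetical helper, and the paper's CARS step is not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_predict

# Synthetic stand-ins: 147 "spectra" with 500 wavelength channels and a
# response that depends on one spectral region, mimicking a soil P pool.
rng = np.random.default_rng(0)
X = rng.normal(size=(147, 500))
y = X[:, 40:60].sum(axis=1) + rng.normal(scale=0.5, size=147)

def interval_pls_rmsecv(X, y, n_intervals=20, n_components=5):
    """Toy iPLS-style selection: split the wavelength axis into contiguous
    intervals, score a PLSR model on each by cross-validated RMSE, and
    return the best (lowest-RMSE) interval."""
    bounds = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
    best = None
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        pls = PLSRegression(n_components=min(n_components, hi - lo))
        y_hat = cross_val_predict(pls, X[:, lo:hi], y, cv=10)
        rmse = np.sqrt(mean_squared_error(y, y_hat.ravel()))
        if best is None or rmse < best[0]:
            best = (rmse, lo, hi)
    return best

rmse, lo, hi = interval_pls_rmsecv(X, y)
print(f"best interval: channels {lo}-{hi}, RMSECV = {rmse:.3f}")
```

A faithful reproduction would add CARS-style competitive reweighting of wavelengths and, as the authors note, an independent validation set.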
Graphical abstract
Full article ">Figure 1
<p>Partial least squares regression and variable selection results of the best laser-induced breakdown spectroscopy (LIBS) models, presented as the predicted versus reference measurements for water-extractable P (<b>a</b>), Olsen P (<b>b</b>), Oxalate-extractable P (<b>c</b>) and total P (<b>d</b>) measurements. Standard root mean square error of cross validation (SRMSE) is also included in each subplot.</p>
Full article ">Figure 2
<p>Representative LIBS-corrected spectra with marked elements associated with their emission lines (<b>a</b>) and selected variable regions (<b>b</b>) by iPLS (white) and CARS (black) for Water-extractable P (Pw), Olsen P (Pol), Oxalate-extractable P (Pox) and Total P (TP).</p>
Full article ">Figure 3
<p>Partial least squares regression and variable selection results of the vis-NIRS models, presented as the predicted versus reference measurements for Water-extractable P (<b>a</b>), Olsen P (<b>b</b>), Oxalate-extractable P (<b>c</b>) and Total P (<b>d</b>) measurements. Standard root mean square error of cross validation (SRMSE) is also included in each subplot.</p>
Full article ">Figure 4
<p>Representative vis-NIRS-corrected spectra with associated absorption bands for the different bonds present in soil (<b>a</b>) and selected variable regions (<b>b</b>) by iPLS (white) and CARS (black) for water-extractable P (Pw), Olsen P (Pol), Oxalate-extractable P (Pox) and total P (TP).</p>
Full article ">Figure 5
<p>Partial least squares regression and variable selection results of the best LIBS-vis-NIRS models, presented as the predicted versus reference measurements for water-extractable P (<b>a</b>), Olsen P (<b>b</b>), Oxalate-extractable P (<b>c</b>) and total P (<b>d</b>) measurements. Standard root mean square error of cross validation (SRMSE) is also included in each subplot.</p>
Full article ">Figure 6
<p>RMSE results of the best LIBS, vis-NIRS and LIBS-vis-NIRS models for water-extractable P, Olsen P, Oxalate-extractable P and total P pools. The variable selection method used is specified on top of each bar.</p>
Full article ">Figure 7
<p>Representative LIBS-Vis-NIR-corrected spectra with elements marked above their associated emission lines and associated absorption bands for the different bonds present in soil (<b>a</b>) and selected variable regions (<b>b</b>) by iPLS (white) and CARS (black) for water-extractable P (Pw), Olsen P (Pol), Oxalate-extractable P (Pox) and total P (TP).</p>
Full article ">
Previous Issue
Back to TopTop