Sensor Intelligence through Neurocomputing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 50079

Special Issue Editors


Guest Editor

Guest Editor
Alpen-Adria-Universität Klagenfurt, Department of Applied Informatics, Klagenfurt, Austria
Interests: machine learning; pattern recognition; image processing; data mining; video understanding; cognitive modeling and recognition

Guest Editor
Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria
Interests: machine learning; cognitive neuroscience; applied mathematics; machine vision

Guest Editor
Associate Professor, Department of Smart System Technologies, University of Klagenfurt, 9020 Klagenfurt, Austria
Interests: analog computing; dynamical systems; neuro-computing with applications in systems simulation and ultra-fast solving of differential equations; nonlinear oscillatory theory with applications; traffic modeling and simulation; traffic telematics

Guest Editor
FernUniversität in Hagen, Hagen, Germany
Interests: computational intelligence; fuzzy logic; nonlinear dynamics; complex systems; chaos theory; power electronics

Guest Editor
University of the Western Cape, ISAT Laboratory, Bellville, South Africa
Interests: Internet of Things; artificial intelligence; blockchain technologies; next-generation networks

Special Issue Information

Dear colleagues,

Sensor intelligence is a key enabler of sensor systems and/or sensor networks that perform better, are more energy-efficient, are more robust (and thus more adaptive to environmental changes), are much faster (and thus better able to meet real-time constraints), and use data rates far more efficiently. Various highly innovative technical concepts of the future crucially need intelligent sensors. Examples of such contexts include cyber-physical systems, digital twins, the IoT (Internet of Things), the IIoT (Industrial Internet of Things), smart factories of the future, autonomous driving systems, smart online health systems of the future, etc.

However, sensor intelligence does face a series of difficult research challenges which need to be addressed by the relevant research community. Amongst the core scientific and technical challenges, efficient data compression, compressive sensing, and robust prediction or forecasting capability are some of the most prominent.

Selected keywords (not an exhaustive list):

  • Neurocomputing-based compressed sensing
  • Data compression schemes in sensors or in sensor networks
  • Data compression schemes for power reduction
  • Joint compressing and caching of data within sensor networks
  • On-board lossy compression schemes
  • Image-based compression of sensor data
  • Spatio-temporal data compression for sensor networks
  • Data compression and optimization schemes in cloud storage
  • Compressive sensing in the context of sensor data fusion
  • Edge machine learning w.r.t. data compression and/or data forecasting
  • Combined compression of multiple data streams
  • Autoencoder techniques for robust and efficient data compression
  • Quantum data compression
  • Relational behavior forecasting from sensor data
  • Deep-learning-based forecasting of sensor data
  • Forecasting concepts w.r.t. or combined with tracking and state estimation
  • Data prediction in clustered sensor networks
  • Data-driven anomaly detection supported by prediction models
  • Traffic data forecasting (e.g., in transportation or in communication systems)
  • Sensor intelligence support of predictive maintenance
  • Sensor events prediction
 

Prof. Dr. Kyandoghere Kyamakya
Dr. Fadi Al-Machot
Dr. Ahmad Haj Mosa
Prof. Dr. Jean Chamberlain Chedjou
Prof. Dr. Zhong Li
Prof. Dr. Antoine Bagula
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Research


20 pages, 2137 KiB  
Article
Data Augmentation for a Virtual-Sensor-Based Nitrogen and Phosphorus Monitoring
by Thulane Paepae, Pitshou N. Bokoro and Kyandoghere Kyamakya
Sensors 2023, 23(3), 1061; https://doi.org/10.3390/s23031061 - 17 Jan 2023
Cited by 6 | Viewed by 1836
Abstract
To better control eutrophication, reliable and accurate information on phosphorus and nitrogen loading is desired. However, the high-frequency monitoring of these variables is economically impractical. This necessitates using virtual sensing to predict them by utilizing easily measurable variables as inputs. While the predictive performance of these data-driven, virtual-sensor models depends on the use of adequate training samples (in quality and quantity), the procurement and operational costs of nitrogen and phosphorus sensors make it impractical to acquire sufficient samples. For this reason, the variational autoencoder, one of the most prominent methods among generative models, was utilized in the present work for generating synthetic data. The generation capacity of the model was verified using water-quality data from two tributaries of the River Thames in the United Kingdom. Compared to the current state of the art, our novel data augmentation, including proper experimental settings or hyperparameter optimization, improved the root mean squared errors by 23–63%, with the most significant improvements observed when up to three predictors were used. In comparing the predictive algorithms' performance (in terms of predictive accuracy and computational cost), k-nearest neighbors and extremely randomized trees were the best-performing algorithms on average. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
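
The paper itself ships no code, but the augmentation idea is easy to sketch. The snippet below trains a small variational autoencoder on a placeholder feature matrix and then samples synthetic rows from the latent prior; the layer sizes, latent dimension, and toy data are our own illustrative assumptions, not the authors' settings.

```python
# Minimal VAE sketch for tabular water-quality data (PyTorch).
import torch
import torch.nn as nn

class TabularVAE(nn.Module):
    def __init__(self, n_features, latent_dim=2, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Train on a (scaled) matrix of surrogate variables, then sample synthetic rows.
x = torch.rand(256, 5)                       # placeholder for real sensor data
vae = TabularVAE(n_features=5)
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    recon, mu, logvar = vae(x)
    vae_loss(recon, x, mu, logvar).backward()
    opt.step()
with torch.no_grad():
    synthetic = vae.decoder(torch.randn(1000, 2))   # 1000 synthetic samples
```
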
Show Figures

Figure 1: The virtual-sensing concept.
Figure 2: A general overview of the research framework for the present work.
Figure 3: The structural framework of a variational autoencoder.
Figure 4: A single neuron.
Figure 5: The architecture of a DNN.
Figure 6: DNN learning curves for the River Enborne.
Figure 7: (a) The original and (b) the synthetic data-distribution plots for TRP versus NO3 in the River Enborne.
Figure 8: The NO3 KDE plots for (a) original and (b) synthetic data.

20 pages, 4622 KiB  
Article
Heart Rate Estimation from Incomplete Electrocardiography Signals
by Yawei Song, Jia Chen and Rongxin Zhang
Sensors 2023, 23(2), 597; https://doi.org/10.3390/s23020597 - 4 Jan 2023
Cited by 3 | Viewed by 3280
Abstract
As one of the most telling indicators of physiological health, heart rate (HR) has long been a staple of research. Unlike many existing methods, this article proposes an approach for short-time HR estimation from electrocardiography (ECG) with time-series missing patterns. Benefiting from the rapid development of deep learning, we adopted a bidirectional long short-term memory model (Bi-LSTM) and a temporal convolution network (TCN) to recover complete heartbeat signals from segments shorter than one cardiac cycle, and then estimated HR from the recovered segment by combining the input with the predicted output. We also compared the performance of Bi-LSTM and TCN on the PhysioNet dataset. Validating the method over a resting heart rate range of 60–120 bpm in the database without significant arrhythmias, and over a corresponding range of 30–150 bpm in the database with arrhythmias, we found that the networks provide a viable estimation approach for incomplete signals in a fixed format. The results are consistent with real heartbeats in the normal heartbeat dataset (γ > 0.7, RMSE < 10) and in the arrhythmia database (γ > 0.6, RMSE < 30), verifying that HR can be estimated by the models in advance. We also discuss the short-time limits of the predictive model. The approach could serve physiological purposes such as mobile sensing in time-constrained scenarios, and it provides useful insights for better time-series analysis under missing-data patterns. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
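
As a rough illustration of the forecasting step described above, the sketch below maps a leading ECG segment to a short predicted continuation with a bidirectional LSTM; concatenating input and forecast then gives a window long enough to locate two R peaks. The network dimensions and toy tensors are assumptions for illustration, not the published model.

```python
# Minimal Bi-LSTM forecaster sketch: map a leading ECG segment to the
# missing continuation so a full RR interval can be reconstructed.
import torch
import torch.nn as nn

class BiLSTMForecaster(nn.Module):
    def __init__(self, hidden=64, horizon=50):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            num_layers=2, batch_first=True,
                            bidirectional=True, dropout=0.2)
        self.head = nn.Linear(2 * hidden, horizon)  # predict `horizon` future samples

    def forward(self, x):                # x: (batch, time, 1), normalized ECG
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # forecast from the last time step

model = BiLSTMForecaster()
segment = torch.randn(8, 100, 1)         # 8 incomplete segments, 100 samples each
future = model(segment)                  # (8, 50) predicted continuation
# Concatenating `segment` and `future` yields a longer series in which
# two R peaks, and hence an HR estimate, can be searched for.
```
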
Show Figures

Figure 1: Diagram of the proposed iHR estimation approach employing time-series forecasting with CNN and RNN, and its three main steps: data preprocessing, sequence forecasting, and HR estimation.
Figure 2: System structure of the iHR estimation model using Bi-LSTM (stacked Bi-LSTM and dropout layers, an FC layer, and an output layer).
Figure 3: System structure of the iHR estimation model using TCN in AlexNet style, stacking residual blocks with dilated causal convolution layers and skip connections.
Figure 4: Examples of data segmenting of the recorded ECG into time-leading and time-lagged parts.
Figure 5: Prediction results of two sequences compared with the ground-truth ECG.
Figure 6: Results comparison on inputs of different lengths: absolute mean error, standard deviation, mean absolute percentage error, root mean square error, and Pearson correlation coefficient.
Figure 7: Statistical histograms and kernel density curves of real HR and estimated HR in the test set.

12 pages, 1751 KiB  
Article
A Novel Zernike Moment-Based Real-Time Head Pose and Gaze Estimation Framework for Accuracy-Sensitive Applications
by Hima Deepthi Vankayalapati, Swarna Kuchibhotla, Mohan Sai Kumar Chadalavada, Shashi Kant Dargar, Koteswara Rao Anne and Kyandoghere Kyamakya
Sensors 2022, 22(21), 8449; https://doi.org/10.3390/s22218449 - 3 Nov 2022
Cited by 3 | Viewed by 1899
Abstract
A real-time head pose and gaze estimation (HPGE) algorithm has excellent potential for technological advances in both human–machine and human–robot interaction. In accuracy-sensitive applications such as driver assistance systems (DAS), HPGE plays a crucial role in avoiding accidents and road hazards. In this paper, the authors propose a new hybrid framework that combines appearance-based and geometry-based conventional methods to extract local and global features. The Zernike moments algorithm is prominent in extracting rotation-, scale-, and illumination-invariant features; conventional discriminant algorithms were then used to classify head poses and gaze direction. Furthermore, experiments were performed on standard datasets and real-time images to analyze the accuracy of the proposed algorithm. The proposed framework estimates the range of direction changes under different illumination conditions in real time. We obtained an accuracy of ~85%; the average response time was 21.52 ms and 7.483 ms for estimating head pose and gaze, respectively, independent of illumination, background, and occlusion. The proposed method is a promising basis for a robust system that remains invariant even under blurring conditions and thus reaches much more significant performance enhancement. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
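
For readers who want to experiment with the feature side of such a pipeline, the sketch below computes Zernike-moment descriptors (here via the mahotas library) and feeds them to a linear discriminant classifier. The radius, moment degree, classifier choice, and random stand-in images are illustrative assumptions; the paper's actual discriminant setup may differ.

```python
# Sketch: Zernike-moment features plus a linear discriminant classifier.
import numpy as np
from mahotas.features import zernike_moments
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def zernike_features(gray_face, radius=64, degree=8):
    # Returns a fixed-length, rotation-invariant descriptor of the face crop.
    return zernike_moments(gray_face, radius, degree=degree)

# faces: list of 2D uint8 arrays; labels: head-pose classes (e.g., 0..4).
faces = [np.random.randint(0, 255, (128, 128), dtype=np.uint8) for _ in range(20)]
labels = np.arange(20) % 5
X = np.array([zernike_features(f) for f in faces])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(clf.predict(X[:3]))   # predicted pose classes
```
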
Show Figures

Figure 1: Overall head pose and gaze direction estimation system.
Figure 2: Sample head pose and gaze images.
Figure 3: Estimation of the head pose for unknown images under different illumination conditions.
Figure 4: Estimation of head pose for occluded images: (a) right pose; (b) right pose at 45°; (c) front pose; (d) left pose with glasses; (e) left pose.
Figure 5: Estimation of gaze direction for varying illumination and occlusion conditions.

17 pages, 2061 KiB  
Article
Short-Term Solar Irradiance Prediction Based on Adaptive Extreme Learning Machine and Weather Data
by Ahmad Alzahrani
Sensors 2022, 22(21), 8218; https://doi.org/10.3390/s22218218 - 27 Oct 2022
Cited by 5 | Viewed by 1908
Abstract
Concerns over fossil fuels and depletable energy sources have motivated the utilization of renewable energy sources, such as solar photovoltaic (PV) power. Utilities have started penetrating the existing primary grid with renewable energy sources. However, penetrating the grid with photovoltaic energy sources degrades the stability of the whole system because photovoltaic power depends on solar irradiance, which is highly intermittent. This paper proposes a prediction method for non-stationary solar irradiance based on an adaptive extreme learning machine. The extreme learning machine uses approximated sigmoid and hyperbolic tangent functions to ensure faster computational time and a more straightforward microcontroller implementation. The proposed method is analyzed using hourly weather data from a specific site at Najran University. The data are preprocessed, trained, tested, and validated. Several evaluation metrics, such as the root mean square error, mean square error, and mean absolute error, are used to evaluate and compare the proposed method with other recently introduced approaches. The results show that the proposed method can predict solar irradiance with high accuracy, with a mean square error of 0.1727. The proposed approach is implemented using a solar irradiance sensor made of a PV cell, a temperature sensor, and a low-cost microcontroller. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
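
The core of an extreme learning machine is compact enough to sketch directly: hidden weights are drawn at random and frozen, and only the output weights are solved, in closed form, by least squares. The sketch below uses a standard tanh activation rather than the Padé-approximated activations the paper employs for microcontroller deployment, and the feature and target arrays are placeholders.

```python
# Minimal extreme learning machine sketch: random hidden layer, output
# weights solved in closed form by least squares (no backpropagation).
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=100):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights, one solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# X: lagged irradiance plus weather features; y: next-hour irradiance.
X, y = rng.random((500, 6)), rng.random(500)      # placeholder data
W, b, beta = elm_fit(X[:400], y[:400])
pred = elm_predict(X[400:], W, b, beta)
```
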
Show Figures

Figure 1: Site location used in this study: (a) Najran University; (b) solar and wind weather station.
Figure 2: Monthly weather and diffused irradiance data of Najran University: (a) diffused horizontal irradiance; (b) air temperature; (c) peak wind speed; (d) relative humidity.
Figure 3: Normalized energy source data at Najran University: (a) global horizontal irradiance in Wh/m²; (b) average wind speed (m/s) versus wind direction.
Figure 4: The standard deviation and uncertainty of the monthly (a) GHI and (b) DHI.
Figure 5: Single-layer extreme learning machine.
Figure 6: Extreme learning machine with time-series input and sliding windows.
Figure 7: Feed-forward neural network trained with PSO.
Figure 8: The prediction methodology, which consists of three stages.
Figure 9: The data cleaning process: the data (a) before and (b) after eliminating night hours.
Figure 10: Padé approximation of the activation functions: (a) tanh; (b) sigmoid.
Figure 11: The data used in training and testing: (a) GHI for a whole year; (b) GHI for five working days; (c) temperature in degrees Celsius; (d) relative humidity in percent.
Figure 12: The predicted samples of ARMA, FNN-PSO, and the proposed method compared to the actual GHI values.
Figure 13: The predicted samples of the proposed method compared to the actual GHI values, recorded every 15 min.
Figure 14: The GHI prediction error; samples are concentrated near 0, and the maximum error is less than 250 W/m².
Figure 15: The experimental results of GHI: predicted readings every 15 min compared to the actual GHI.

20 pages, 1046 KiB  
Article
A Virtual Sensing Concept for Nitrogen and Phosphorus Monitoring Using Machine Learning Techniques
by Thulane Paepae, Pitshou N. Bokoro and Kyandoghere Kyamakya
Sensors 2022, 22(19), 7338; https://doi.org/10.3390/s22197338 - 27 Sep 2022
Cited by 10 | Viewed by 2428
Abstract
Harmful cyanobacterial bloom (HCB) is problematic for drinking water treatment, and some of its strains can produce toxins that significantly affect human health. To better control eutrophication and HCB, catchment managers need to continuously keep track of nitrogen (N) and phosphorus (P) in the water bodies. However, the high-frequency monitoring of these water quality indicators is not economical. In these cases, machine learning techniques may serve as viable alternatives, since they can learn directly from the available surrogate data. In the present work, random forest, extremely randomized trees (ET), extreme gradient boosting, k-nearest neighbors, light gradient boosting machine, and bagging-regressor-based virtual sensors were used to predict N and P in two catchments with contrasting land uses. The effects of data scaling and missing-value imputation were also assessed, while Shapley additive explanations were used to rank feature importance. A specification book, a sensitivity analysis, and best practices for developing virtual sensors are discussed. The results show that ET, the MinMax scaler, and a multivariate imputer were the best predictive model, scaler, and imputer, respectively. The highest predictive performance, reported in terms of R2, was 97% in the rural catchment and 82% in the urban catchment. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
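
A minimal rendering of the reported best combination (multivariate imputation, MinMax scaling, extremely randomized trees) fits in a few lines of scikit-learn; the toy arrays below stand in for the catchment measurements, and the hyperparameters are illustrative rather than the authors' tuned values.

```python
# Sketch of the reported best combination: multivariate imputation,
# MinMax scaling, and an extremely randomized trees regressor.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.pipeline import make_pipeline

# X: surrogate variables (pH, conductivity, turbidity, ...); y: nitrogen load.
rng = np.random.default_rng(1)
X = rng.random((300, 6))
X[rng.random(X.shape) < 0.05] = np.nan   # simulate missing sensor readings
y = rng.random(300)

virtual_sensor = make_pipeline(
    IterativeImputer(random_state=0),    # multivariate missing-value imputation
    MinMaxScaler(),
    ExtraTreesRegressor(n_estimators=200, random_state=0),
)
virtual_sensor.fit(X[:250], y[:250])
print(virtual_sensor.score(X[250:], y[250:]))  # R² on held-out data
```
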
Show Figures

Figure 1: A working principle of the virtual sensing system.
Figure 2: A boxplot showing conductivity outliers in The Cut.
Figure 3: Spot-checking nitrate predictive performance in (a) the River Enborne and (b) The Cut.

24 pages, 9449 KiB  
Article
A Smart Visual Sensing Concept Involving Deep Learning for a Robust Optical Character Recognition under Hard Real-World Conditions
by Kabeh Mohsenzadegan, Vahid Tavakkoli and Kyandoghere Kyamakya
Sensors 2022, 22(16), 6025; https://doi.org/10.3390/s22166025 - 12 Aug 2022
Cited by 1 | Viewed by 3384
Abstract
In this study, we propose a new model for optical character recognition (OCR) based on both CNNs (convolutional neural networks) and RNNs (recurrent neural networks). The distortions affecting a document image can take different forms, such as blur (focus blur, motion blur, etc.), shadow, and bad contrast. Document-image distortions significantly decrease the performance of OCR systems, in the worst case down to a performance close to zero. A robust OCR model that performs well even under hard distortion conditions is therefore still sorely needed. Our comprehensive study in this paper shows that various related works can somewhat improve their respective OCR recognition performance on degraded document images (e.g., captured by smartphone cameras under different conditions and thus distorted by shadows, contrast, blur, etc.), but it is worth underscoring that the improved recognition is neither sufficient nor always satisfactory, especially in very harsh conditions. Therefore, in this paper, we suggest and develop a much better and fully different approach and model architecture, which significantly outperforms the aforementioned previous related works. Furthermore, a new dataset was gathered to represent a series of different, well-representative real-world scenarios of hard distortion conditions. The suggested OCR model performs in such a way that even document images from the hardest conditions, previously not recognizable by other OCR systems, can be fully recognized with up to 97.5% accuracy/precision by our new deep-learning-based OCR model. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
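
The paper's architecture is far richer than can be reproduced here, but the general CNN-plus-RNN recognition pattern it builds on can be sketched as a tiny CRNN trained with CTC loss: convolutional layers turn the word image into a sequence of column features, a bidirectional LSTM models that sequence, and CTC aligns it with the character string. Every dimension and the 26-letter alphabet below are illustrative assumptions.

```python
# Rough CRNN-style sketch of a CNN + RNN text recognizer with CTC loss.
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, n_classes=27):             # 26 letters + CTC blank
        super().__init__()
        self.cnn = nn.Sequential(                  # (N, 1, 32, W) -> (N, 64, 8, W/4)
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.rnn = nn.LSTM(64 * 8, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, n_classes)

    def forward(self, img):                        # img: (N, 1, 32, W)
        f = self.cnn(img)                          # (N, 64, 8, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)       # (N, W/4, 64*8) column features
        out, _ = self.rnn(f)
        return self.fc(out)                        # (N, T, n_classes)

model = TinyCRNN()
imgs = torch.randn(4, 1, 32, 128)                  # cropped word images
logits = model(imgs).log_softmax(2).permute(1, 0, 2)   # CTC expects (T, N, C)
targets = torch.randint(1, 27, (4, 10))            # encoded ground-truth words
loss = nn.CTCLoss(blank=0)(logits, targets,
                           torch.full((4,), 32, dtype=torch.long),
                           torch.full((4,), 10, dtype=torch.long))
```
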
Show Figures

Figure 1: Optical character recognition (OCR).
Figure 2: The two different methods to define the text "boundary box".
Figure 3: Main distortion problems encountered in document images: (a) a low-light smartphone photo with increased noise; (b) a document image with shadow; (c) a document image with blur, barely readable with the naked eye. Source: our own images.
Figure 4: Our new global model, composed of (a) text detection (Module 1) and (b) text recognition (Module 2).
Figure 5: Detailed architecture of Module 1: (A) feature extraction using pre-trained ResNet 101; (B) feature fusion layers merging results via concatenation and residual blocks; (C) output layers with score values and quad polygons of boundary boxes; and (D) a non-max suppression layer.
Figures 6–10: 100 representative sample images each of "very bad", "bad", "middle", "good", and "very good" quality (extracts from a much bigger dataset; personal data covered by black rectangles for privacy).
Figure 11: Our new text recognition architecture for Module 2: (A) preprocessing layers; (B) feature extraction of the text image; feature fusion via residual layers and LSTM with an attention mechanism; and final word determination.
Figure 12: The evolution of training and validation performance over the first 200 epochs.
Figure 13: Sample German words generated by our Python module for training the text recognition.
Figure 14: The evolution of training and validation losses over the first 100 epochs.
Figure 15: Samples of detected text bounding boxes from Module 1 under (a) normal conditions, (b) contrast problems, (c) shadow problems, and (d) noise and rotation problems.
Figure 16: Samples of text recognition inputs cropped from the input image by Module 1 and fed to Module 2.
Figure 17: Sample text recognition using Tesseract (open-source OCR system) and our novel OCR model; most text samples recognized by our model cannot be read by Tesseract.

21 pages, 3159 KiB  
Article
Real-Time Anomaly Detection for an ADMM-Based Optimal Transmission Frequency Management System for IoT Devices
by Hongde Wu, Noel E. O’Connor, Jennifer Bruton, Amy Hall and Mingming Liu
Sensors 2022, 22(16), 5945; https://doi.org/10.3390/s22165945 - 9 Aug 2022
Cited by 4 | Viewed by 3484
Abstract
In this paper, we investigate different scenarios of anomaly detection in decentralised Internet of Things (IoT) applications. Specifically, an anomaly detector is devised to detect different types of anomalies for an IoT data management system based on the decentralised alternating direction method of multipliers (ADMM), which was proposed in our previous work. The anomaly detector only requires limited information from the IoT system and can be operated using both a mathematical-rule-based approach and the deep learning approach proposed in this paper. Our experimental results show that detection based on the mathematical approach is simple to implement but comes with lower detection accuracy (78.88%). In contrast, the deep-learning-enabled approach easily achieves a higher detection accuracy (96.28%) in a real-world working environment. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
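
The exact detection rules are specific to the paper's ADMM system, but a mathematical rule-based detector of the same general flavor can be sketched as a moving-median test: flag a device whose transmitted value departs from its recent robust baseline by more than a threshold. The window size and threshold below are arbitrary illustrative choices, not the paper's rules.

```python
# Minimal rule-based anomaly detector sketch using a moving median and MAD.
import numpy as np

rng = np.random.default_rng(0)

def rule_based_anomalies(z, window=10, threshold=3.0):
    """z: 1D array of one device's transmission values over time."""
    flags = np.zeros(len(z), dtype=bool)
    for t in range(window, len(z)):
        recent = z[t - window:t]
        med = np.median(recent)
        mad = np.median(np.abs(recent - med)) + 1e-9   # robust spread estimate
        flags[t] = abs(z[t] - med) / mad > threshold
    return flags

z = np.concatenate([rng.normal(5, 0.1, 100),
                    rng.normal(8, 0.1, 20)])           # injected anomaly at t=100
print(np.where(rule_based_anomalies(z))[0][:5])        # first flagged time steps
```
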
Show Figures

Figure 1: Schematic diagram of the system architecture [10].
Figure 2: System implementation flowchart [10].
Figure 3: Anomaly detection and response process.
Figure 4: The proposed rule-based detection process.
Figure 5: An example illustrating the change of transmission pattern due to the manipulation of device 1.
Figure 6: System device setup [10].
Figure 7: The z values of devices 1, 2, and 3 under different anomalies in the SS.
Figure 8: The z values of devices 1, 2, and 3 in the RS, including normal and abnormal situations; general two-class detection results from the rule-based method and LSTM are compared with the ground truth.
Figure 9: The z values of devices 1, 2, and 3 during the update process in the PS; misalignments, fluctuations, and jumps can affect detection performance in real-world applications.
Figure 10: Confusion matrix for LSTM and rule-based detection on general two-class detection in the RS.
Figure 11: Four-class anomaly detection results from different deep learning architectures.

24 pages, 7471 KiB  
Article
Development of a Smart Chair Sensors System and Classification of Sitting Postures with Deep Learning Algorithms
by Taraneh Aminosharieh Najafi, Antonio Abramo, Kyandoghere Kyamakya and Antonio Affanni
Sensors 2022, 22(15), 5585; https://doi.org/10.3390/s22155585 - 26 Jul 2022
Cited by 14 | Viewed by 9342
Abstract
Nowadays, in modern societies, a sedentary lifestyle is almost inevitable for a majority of the population. Long hours of sitting, especially in wrong postures, may result in health complications. A smart chair with the capability to identify sitting postures can help reduce the health risks induced by a modern lifestyle. This paper presents the design, realization and evaluation of a new smart chair sensors system capable of identifying sitting postures. The system consists of eight pressure sensors placed on the chair's sitting cushion and backrest. A signal acquisition board was designed from scratch to acquire the data generated by the pressure sensors and transmit them via Wi-Fi to a purposely developed graphical user interface, which monitors and stores the acquired sensor data on a computer. The designed system was tested by means of an extensive sitting experiment involving 40 subjects, and from the acquired data, the respective sitting postures were classified out of eight possible postures. Hereby, the performance of seven deep-learning algorithms was assessed. The best accuracy, 91.68%, was achieved by an echo memory network model. The designed smart chair sensors system is simple and versatile, low cost and accurate, and it can easily be deployed in several smart chair environments, both in public and private contexts. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
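
Of the seven assessed algorithms, the simplest, an MLP with one 30-neuron hidden layer over the 8 sensor channels (see Figure 6 in the caption list below), can be sketched in a few lines; scikit-learn is used here as a stand-in for the authors' framework, with random placeholder data in place of the real pressure recordings.

```python
# Sketch of the 8-30-8 MLP posture classifier described in the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.random((800, 8))            # placeholder pressure readings (8 FSRs)
y = rng.integers(0, 8, 800)         # posture labels P1..P8

clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))           # predicted postures for new sitting data
```
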
Show Figures

Figure 1: The structure of the proposed smart chair posture recognition system.
Figure 2: The smart chair sensors system: (a) block diagram of the sensors system circuit (analog front-end for seat and backrest, DSP, and Wi-Fi module); (b) top view of the chair equipped with labeled FSR sensors.
Figure 3: The circuit, battery, and enclosure: (a) circuit top layer with the analog front-end; (b) circuit bottom layer with the power supply, DSP, and Wi-Fi module; (c) the enclosure hosting the battery.
Figure 4: GUI front panel developed in the LabVIEW environment for data acquisition and real-time plotting of the back and seat sensor signals.
Figure 5: The eight sitting postures: (a) P1 upright; (b) P2 slouching; (c) P3 bending forward; (d) P4 bending backwards; (e) P5 bending left; (f) P6 bending right; (g) P7 right leg above; (h) P8 left leg above.
Figure 6: The adopted MLP model architecture: an input layer with 8 neurons (8 sensor signals), a hidden layer with 30 neurons, and an output layer with 8 neurons (8 posture classes).
Figure 7: The adopted CNN model architecture: Conv1D with 16 filters, dropout, max pooling, flatten, and a dense layer with 30 neurons, followed by an 8-neuron output layer.
Figure 8: The adopted LSTM model architecture: LSTM with 200 units, dropout, a dense layer with 200 neurons, and an 8-neuron output layer.
Figure 9: The adopted EMN model architecture: an encoder converting the 3D inputs to echo memory matrices, and a decoder fusing the features of the 8 sensor signals via a CNN2D layer with max pooling and dropout.
Figure 10: The linearity error of the sensor analog front-end, with uncertainty error bars.
Figure 11: Step response of the sensor analog front-end, with the V_AD crossings at 0%, 10%, 63%, and 90% of its asymptotic range.
Figure 12: The structure of the 5-fold cross-validation procedure for train/test splits of the 40-subject dataset.
Figure 13: Confusion matrices of the 8-posture classification results for the 7 DL models (k = 1 validation).
Figure 14: Training and testing loss of the EMN model across the 5-fold cross-validation over 100 epochs.
Figure 15: Raw pressure sensor signals of six different subjects sitting in Posture 1, showing differing weight-distribution patterns.
Figure 16: Raw pressure sensor signals during three repetitions of the same subject sitting in Posture 1, with differing patterns in each repetition.
Figure 17: Posture 2 and Posture 3 of different subjects can have similar raw signal patterns.

18 pages, 497 KiB  
Article
Energy Consumption Forecasting for Smart Meters Using Extreme Learning Machine Ensemble
by Paulo S. G. de Mattos Neto, João F. L. de Oliveira, Priscilla Bassetto, Hugo Valadares Siqueira, Luciano Barbosa, Emilly Pereira Alves, Manoel H. N. Marinho, Guilherme Ferretti Rissi and Fu Li
Sensors 2021, 21(23), 8096; https://doi.org/10.3390/s21238096 - 3 Dec 2021
Cited by 14 | Viewed by 5365
Abstract
The employment of smart meters for energy consumption monitoring is essential for the planning and management of power generation systems. In this context, forecasting energy consumption is a valuable asset for decision making, since it can improve the predictability of forthcoming demand for energy providers. In this work, we propose a data-driven ensemble that combines five single models well known in the forecasting literature: a statistical linear autoregressive model and four artificial neural networks (radial basis function, multilayer perceptron, extreme learning machine, and echo state network). The proposed ensemble employs an extreme learning machine as the combination model due to its simplicity, learning speed, and greater generalization ability in comparison to other artificial neural networks. The experiments were conducted on real consumption data collected from a smart meter in a one-step-ahead forecasting scenario. The results using five different performance metrics demonstrate that our solution outperforms other statistical, machine learning, and ensemble models proposed in the literature. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
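
The distinctive step is the combiner: the five base forecasts are stacked column-wise and fused by an ELM, i.e., a fixed random hidden layer followed by a single least-squares solve. In the sketch below the base models are simulated as noisy copies of the target, purely to keep the example self-contained; the hidden-layer size is an arbitrary choice.

```python
# Sketch of the ELM combination step over stacked base-model forecasts.
import numpy as np

rng = np.random.default_rng(3)
y = rng.random(300)                               # observed consumption
base_preds = np.column_stack([y + rng.normal(0, s, 300)
                              for s in (0.05, 0.1, 0.2, 0.1, 0.15)])  # 5 base models

W = rng.normal(size=(5, 50))                      # fixed random input weights
b = rng.normal(size=50)
H = np.tanh(base_preds @ W + b)                   # hidden activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # combiner output weights

fused = np.tanh(base_preds @ W + b) @ beta        # ensemble forecast
print(np.mean((fused - y) ** 2))                  # MSE of the fused output
```
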
Show Figures

Figure 1: Model of the proposed ensemble.
Figure 2: Stages of preprocessing and postprocessing employed in the modeling of the forecasting method.
Figure 3: Boxplot graphic.
Figure 4: Energy consumption forecasting obtained by the ELM and ensemble ELM.

15 pages, 2200 KiB  
Article
Sentimental Analysis of COVID-19 Related Messages in Social Networks by Involving an N-Gram Stacked Autoencoder Integrated in an Ensemble Learning Scheme
by Venkatachalam Kandasamy, Pavel Trojovský, Fadi Al Machot, Kyandoghere Kyamakya, Nebojsa Bacanin, Sameh Askar and Mohamed Abouhawwash
Sensors 2021, 21(22), 7582; https://doi.org/10.3390/s21227582 - 15 Nov 2021
Cited by 23 | Viewed by 3307
Abstract
The current population worldwide extensively uses social media to share thoughts, societal issues, and personal concerns. Social media can be viewed as an intelligent platform that can be augmented with a capability to analyze and predict various issues such as business needs, environmental needs, election trends (polls), governmental needs, etc. This motivated us to initiate a comprehensive search of COVID-19 pandemic-related views and opinions amongst the population on Twitter. The basic training data were collected from Twitter posts. On this basis, we developed research involving ensemble deep learning techniques to reach a better prediction of the future evolution of views on Twitter than previous works doing the same. First, feature extraction is performed through an N-gram stacked autoencoder supervised learning algorithm. The extracted features are then fed into a classification and prediction stage involving an ensemble fusion scheme of selected machine learning techniques, such as decision tree (DT), support vector machine (SVM), random forest (RF), and k-nearest neighbor (KNN). All individual results are combined/fused for a better prediction by using both mean and mode techniques. Our proposed scheme of an N-gram stacked encoder integrated in an ensemble machine learning scheme outperforms all other existing competing techniques, such as the unigram autoencoder, the bigram autoencoder, etc. Our experimental results were obtained from a comprehensive evaluation involving a dataset extracted from open-source data available from Twitter, filtered using the keywords “covid”, “covid19”, “coronavirus”, “covid-19”, “sarscov2”, and “covid_19”. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
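
The fusion stage maps naturally onto scikit-learn's VotingClassifier: hard voting reproduces the per-sample mode over the DT, SVM, RF, and KNN predictions. The feature matrix below is a toy stand-in for the N-gram stacked autoencoder output, and all classifier settings are defaults rather than the paper's tuned ones.

```python
# Sketch of mode-based fusion of DT, SVM, RF, and KNN via hard voting.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
X = rng.random((400, 32))           # placeholder autoencoder features
y = rng.integers(0, 3, 400)         # sentiment classes

fusion = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier()),
                ("svm", SVC()),
                ("rf", RandomForestClassifier()),
                ("knn", KNeighborsClassifier())],
    voting="hard")                  # hard voting = per-sample mode
fusion.fit(X, y)
print(fusion.predict(X[:5]))
```
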
Show Figures

Figure 1: Proposed architecture for the N-gram stacked autoencoder.
Figure 2: Framework of data preprocessing.
Figure 3: Feature extraction process architecture.
Figure 4: Accuracy of ensemble models (DT, SVM, RF, and KNN) in predicting COVID-19 severity with N-gram stacked autoencoders.
Figure 5: Error rate of the N-gram stacked autoencoders.
Figure 6: ROC for SVM, KNN, RF, and DT classifiers.

21 pages, 7518 KiB  
Article
A Deep-Learning Based Visual Sensing Concept for a Robust Classification of Document Images under Real-World Hard Conditions
by Kabeh Mohsenzadegan, Vahid Tavakkoli and Kyandoghere Kyamakya
Sensors 2021, 21(20), 6763; https://doi.org/10.3390/s21206763 - 12 Oct 2021
Cited by 7 | Viewed by 2526
Abstract
This paper’s core objective is to develop and validate a new neurocomputing model to classify document images under particularly demanding hard conditions such as image distortions, variance in image size and scale, a huge number of classes, etc. Document classification is a special machine vision task in which document images are categorized according to their likelihood. It is by itself an important topic for the digital office and has several usages. Different methods for solving this problem have been presented in various studies; the performance they reach, however, is not yet good enough. The task is very tough and challenging, so a novel, more accurate and precise model is needed. Although related works do reach acceptable accuracy values under less hard conditions, they generally fail completely in the face of the above-mentioned hard, real-world conditions, including, amongst others, distortions such as noise, blur, low contrast, and shadows. In this paper, a novel deep CNN model is developed, validated and benchmarked against a selection of the most relevant recent document classification models. Additionally, the model's sensitivity was significantly improved by injecting different artifacts during the training process. In the benchmarking, it clearly outperforms all others by at least 4%, thus reaching more than 96% accuracy. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
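
The artifact-injection idea used during training can be sketched with OpenCV: each clean document image is randomly degraded with Gaussian noise, focus blur, and contrast/brightness changes before being fed to the classifier. The parameter ranges below are illustrative and do not reproduce the paper's Table 4 settings.

```python
# Sketch of training-time artifact injection for document images.
import numpy as np
import cv2

rng = np.random.default_rng(5)

def inject_artifacts(img):
    """img: grayscale uint8 document image; returns a distorted copy."""
    out = img.astype(np.float32)
    out += rng.normal(0, rng.uniform(2, 15), img.shape)       # Gaussian noise
    k = int(rng.choice([3, 5, 7]))
    out = cv2.GaussianBlur(out, (k, k), 0)                    # focus blur
    alpha = rng.uniform(0.6, 1.2)                             # contrast change
    beta = rng.uniform(-30, 30)                               # brightness shift
    return np.clip(alpha * out + beta, 0, 255).astype(np.uint8)

doc = np.full((256, 256), 255, dtype=np.uint8)                # placeholder page
cv2.putText(doc, "Invoice", (20, 128), cv2.FONT_HERSHEY_SIMPLEX, 1, 0, 2)
augmented = [inject_artifacts(doc) for _ in range(8)]          # training variants
```
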
Show Figures

Figure 1: Document classification general processing pipeline; the input is a document image and the output is the estimated type/label.
Figure 2: Main problems encountered in document images: (a) a typical mobile phone photo; (b) motion blur; (c) multiple documents within one single shot; (d) a spotlight blocking/disturbing the content. (Source: our own pictures.)
Figure 3: The novel global model, composed of (a) a document detection module and (b) a document classifier.
Figure 4: The document detection model: (A) feature extraction based on ResNet101; (B) further feature extraction layers; (C) output layers; and (D) a non-max suppression layer.
Figure 5: "Model I" developed for document image classification.
Figure 6: "Model II" developed for document image classification.
Figure 7: Effect of blur filters with kernel sizes of (a) 3 × 3, (b) 5 × 5, and (c) 7 × 7, visualized with a Canny filter applied to each blurred image.
Figure 8: "Model III" developed for document image classification.
Figure 9: Example Gabor filter outputs for theta (rotation) from 0 to 135 degrees and sigma from 16 to 40, with kernel size 40.
Figure 10: Model IV, containing (A) feature extraction based on ResNet101, (B) feature extraction layers, and (C) an output layer.
Figure 11: Model V, which integrates Models III and IV: it classifies the document image separately with both and then concatenates and takes the maximum probabilities to determine the most probable class.
Figures 12–14: Confusion matrices of the performance results of Models III and IV for 1600 test images from the dataset of Haley et al. [59].
Figure 15: The effect of different artifacts on the accuracy of our best classifier (Model V): (a) Gaussian noise; (b) contrast change; (c) brightness change; (d) focus blur; (e) motion blur; (f) combined artifacts.
Figure 16: Comparison with related works under (a) Gaussian noise, (b) focus blur, and (c) combined artifact injection at increasing distortion levels.


Review


38 pages, 8501 KiB  
Review
A Comprehensive “Real-World Constraints”-Aware Requirements Engineering Related Assessment and a Critical State-of-the-Art Review of the Monitoring of Humans in Bed
by Kyandoghere Kyamakya, Vahid Tavakkoli, Simon McClatchie, Maximilian Arbeiter and Bart G. Scholte van Mast
Sensors 2022, 22(16), 6279; https://doi.org/10.3390/s22166279 - 21 Aug 2022
Cited by 1 | Viewed by 2808
Abstract
Currently, abnormality detection and/or prediction is a very hot topic. In this paper, we address it in the frame of the activity monitoring of a human in bed. The paper presents a comprehensive formulation of a requirements engineering dossier for a monitoring system of a “human in bed” for abnormal behavior detection and forecasting. Hereby, practical and real-world constraints and concerns were identified and taken into consideration in the requirements dossier. A comprehensive and holistic discussion of the anomaly concept was extensively conducted and contributed to laying the ground for a realistic specification book of the anomaly detection system. Some systems engineering relevant issues were also briefly addressed, e.g., verification and validation. A structured critical review of the relevant literature led to identifying four major approaches of interest. These four approaches were evaluated from the perspective of the requirements dossier. It was thereby clearly demonstrated that the approach integrating graph networks and advanced deep-learning schemes (Graph-DL) is the one capable of fully meeting the challenging requirements expressed in the real-world-conditions-aware specification book. Nevertheless, to meet immediate market needs, systems based on advanced statistical methods, after a series of adaptations, already ensure and satisfy the important requirements related to, e.g., low cost, solid data security, and a fully embedded and self-sufficient implementation. To conclude, some recommendations regarding the system architecture and overall systems engineering were formulated. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
Show Figures

Figure 1: Human-in-bed monitoring through signals generated by sensors placed under the bed; under each of the four bed legs, one weight sensor and one motion sensor continuously generate eight measurement values for the anomaly detection intelligent system [25].
Figure 2: The huge variety of possible static and/or dynamic activities of a human in bed, which the intelligent system must detect and, eventually, predict as normal or abnormal. (Source of the image parts: Freepik.)
Figure 3: The four major data-processing levels used in the comprehensive ontological framework for defining the general "anomaly" concept.
Figure 4: A possible and useful comprehensive structuring of the time dimension into regions and sub-regions.
Figure 5: Toy examples comparing (a) "conventional anomaly detection" and (b) "graph anomaly detection"; graph anomaly detection also identifies graph-level anomalies.

35 pages, 2347 KiB  
Review
From Fully Physical to Virtual Sensing for Water Quality Assessment: A Comprehensive Review of the Relevant State-of-the-Art
by Thulane Paepae, Pitshou N. Bokoro and Kyandoghere Kyamakya
Sensors 2021, 21(21), 6971; https://doi.org/10.3390/s21216971 - 20 Oct 2021
Cited by 51 | Viewed by 6484
Abstract
Rapid urbanization, industrial development, and climate change have resulted in water pollution and in the quality deterioration of surface and groundwater at an alarming rate, making its quick, accurate, and inexpensive detection imperative. Despite the latest developments in sensor technologies, the real-time determination of certain parameters remains difficult or uneconomical. In such cases, the use of data-derived virtual sensors can be an effective alternative. In this paper, the feasibility of virtual sensing for water quality assessment is reviewed. The review focuses on an overview of the key water quality parameters for a particular use case and the development of corresponding cost estimates for their monitoring. It further evaluates the current state of the art in terms of the modeling approaches used, the parameters studied, and whether the inputs were pre-processed, by interrogating the relevant literature published between 2001 and 2021. The review identified artificial neural networks, random forest, and multiple linear regression as the dominant machine learning techniques used for developing inferential models. The survey also highlights the need for a comprehensive virtual sensing system in an Internet of Things environment. Thus, the review formulates the specification book for an advanced water quality assessment process (involving a virtual sensing module) that can enable near real-time monitoring of water quality. Full article
(This article belongs to the Special Issue Sensor Intelligence through Neurocomputing)
Show Figures

Figure 1: Cost categories for conducting a water quality test.
Figure 2: A virtual sensor based (a) entirely on physical sensors, (b) only on another virtual sensor, and (c) on both virtual and physical sensors [75].
Figure 3: An overview of the typical steps undertaken in developing data-derived virtual sensors [25,30].
Figure 4: Techniques used most in the papers reviewed.
Figure 5: (a) Commonly used input parameters; (b) commonly predicted parameters.
Figure 6: A virtual sensing architecture in an IoT environment [15,131].
