
Human Activity Recognition in Smart Sensing Environment

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Wearables".

Deadline for manuscript submissions: 31 December 2024 | Viewed by 36092

Special Issue Editors


Guest Editor
Department of Informatics, Systems and Communication, University of Milano-Bicocca, 20126 Milan, Italy
Interests: software architecture; mobile-based systems; m-health; e-health; self-healing; self-repairing

Guest Editor
Department of Informatics, University of Milano Bicocca, 20125 Milano, Italy
Interests: software architecture; self-healing; self-repairing; cloud computing; monitoring

Special Issue Information

Dear Colleagues,

Research on smart sensing environments has grown considerably in recent years, driven by the availability of increasingly high-performance sensors. Smart sensing environments that can detect the activities performed by their users can contribute significantly to an individual's well-being, both mental (because their needs are automatically met) and physical (because their health can be monitored constantly and in real time).

Human Activity Recognition (HAR) is a field of research that defines and experiments with approaches to recognizing human activities. Such recognition is usually performed by exploiting data from sensors that are available in the environment (environmental sensors) or worn directly on the subject (wearable sensors). The most common kinds of data obtained from environmental sensors are images and videos from cameras; however, other environmental sensors are often employed, including, but not limited to, temperature, humidity, pressure, audio, and vibration sensors. The most commonly used wearable sensors, on the other hand, are accelerometers, followed by magnetometers and gyroscopes, which today are commonly found in smartphones (which can themselves be considered wearables) and in dedicated wearable devices such as smartwatches and fitness bands.

The aim of this Special Issue, entitled “Human Activity Recognition in Smart Sensing Environment”, is to attract high-quality, innovative, and original papers on exploiting sensor data to perform HAR-related tasks, regardless of the nature of the sensors themselves, which may be environmental, wearable, or a combination of the two.

Topics of interest include, but are not limited to, the following:

  • Sensor data fusion;
  • Smartphone sensors;
  • Wearable sensors;
  • Human Activity Recognition (HAR);
  • Smart environments;
  • Ambient intelligence;
  • Ambient sensing;
  • Ambient assisted living;
  • Machine and deep learning solutions exploiting sensor data in HAR.

Dr. Daniela Micucci
Dr. Marco Mobilio
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human activity recognition
  • machine learning
  • deep learning
  • environmental sensors
  • wearable sensors
  • ambient sensing
  • ambient assisted living

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (16 papers)


Research


18 pages, 18009 KiB  
Article
Position-Aware Indoor Human Activity Recognition Using Multisensors Embedded in Smartphones
by Xiaoqing Wang, Yue Wang and Jiaxuan Wu
Sensors 2024, 24(11), 3367; https://doi.org/10.3390/s24113367 - 24 May 2024
Cited by 1 | Viewed by 854
Abstract
Composite indoor human activity recognition is very important in elderly health monitoring and is more difficult than identifying individual human movements. This article proposes a sensor-based human indoor activity recognition method that integrates indoor positioning. Convolutional neural networks are used to extract spatial information contained in geomagnetic sensors and ambient light sensors, while transform encoders are used to extract temporal motion features collected by gyroscopes and accelerometers. We established an indoor activity recognition model with a multimodal feature fusion structure. In order to explore the possibility of using only smartphones to complete the above tasks, we collected and established a multisensor indoor activity dataset. Extensive experiments verified the effectiveness of the proposed method. Compared with algorithms that do not consider the location information, our method has a 13.65% improvement in recognition accuracy. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: apartment layout; sensor data collected under different activities; overview of the LM-IAR workflow; geomagnetic data collected from different rooms; data formation of LFN; LFN structure; MFN structure; modal fusion layer (MFL) structure; confusion matrices of LFN, LM-IAR, and M-IAR; recognition accuracy and inference time with different lengths of data segmentation; accuracy per experiment participant.
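
The abstract above describes a CNN branch for spatial cues (geomagnetic and ambient light sensors) fused with a transformer-encoder branch for temporal motion features (gyroscope and accelerometer). The paper's exact LFN/MFN architecture is not reproduced here; the following is only a minimal PyTorch sketch of that kind of two-branch, late-fusion design, with all layer sizes and channel counts chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

class TwoBranchHAR(nn.Module):
    """Illustrative two-branch fusion model: a CNN over location-related
    channels and a Transformer encoder over motion channels, concatenated
    before a small classification head. Not the paper's exact LM-IAR model."""
    def __init__(self, loc_channels=4, motion_channels=6, n_classes=10):
        super().__init__()
        # Spatial branch: 1-D convolution over geomagnetic + light channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(loc_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Temporal branch: Transformer encoder over accelerometer + gyroscope.
        self.proj = nn.Linear(motion_channels, 64)
        enc_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(32 + 64, n_classes)

    def forward(self, loc, motion):
        # loc: (batch, loc_channels, seq_len); motion: (batch, seq_len, motion_channels)
        spatial = self.cnn(loc).squeeze(-1)                      # (batch, 32)
        temporal = self.encoder(self.proj(motion)).mean(dim=1)   # (batch, 64)
        return self.head(torch.cat([spatial, temporal], dim=1))

logits = TwoBranchHAR()(torch.randn(8, 4, 128), torch.randn(8, 128, 6))
print(logits.shape)  # torch.Size([8, 10])
```
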
23 pages, 944 KiB  
Article
Unobtrusive Cognitive Assessment in Smart-Homes: Leveraging Visual Encoding and Synthetic Movement Traces Data Mining
by Samaneh Zolfaghari, Annica Kristoffersson, Mia Folke, Maria Lindén and Daniele Riboni
Sensors 2024, 24(5), 1381; https://doi.org/10.3390/s24051381 - 21 Feb 2024
Viewed by 1056
Abstract
The ubiquity of sensors in smart-homes facilitates the support of independent living for older adults and enables cognitive assessment. Notably, there has been a growing interest in utilizing movement traces for identifying signs of cognitive impairment in recent years. In this study, we introduce an innovative approach to identify abnormal indoor movement patterns that may signal cognitive decline. This is achieved through the non-intrusive integration of smart-home sensors, including passive infrared sensors and sensors embedded in everyday objects. The methodology involves visualizing user locomotion traces and discerning interactions with objects on a floor plan representation of the smart-home, and employing different image descriptor features designed for image analysis tasks and synthetic minority oversampling techniques to enhance the methodology. This approach distinguishes itself by its flexibility in effortlessly incorporating additional features through sensor data. A comprehensive analysis, conducted with a substantial dataset obtained from a real smart-home, involving 99 seniors, including those with cognitive diseases, reveals the effectiveness of the proposed functional prototype of the system architecture. The results validate the system’s efficacy in accurately discerning the cognitive status of seniors, achieving a macro-averaged F1-score of 72.22% for the two targeted categories: cognitively healthy and people with dementia. Furthermore, through experimental comparison, our system demonstrates superior performance compared with state-of-the-art methods. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: example movement patterns based on the Martino-Saltzman model, shown with and without the home floor plan context; overview of the functional prototype of the system architecture; encoded image of locomotion traces showing directional changes and movement speed; macro- and weighted-averaged F1-scores for cognitive assessment with traditional models and LOPO cross-validation (CRF and SURF); macro- and weighted-averaged measures for long-term cognitive assessment.
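
The methodology above couples image-descriptor features extracted from visualized locomotion traces with synthetic minority oversampling before classification, and reports a macro-averaged F1-score. As a rough sketch of that last stage (oversampling plus evaluation), assuming the feature vectors are already extracted, one could proceed as follows; the classifier choice, feature dimensions, and class counts here are placeholders, not those of the paper.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder feature matrix: one encoded-trace descriptor per participant;
# labels: 0 = cognitively healthy, 1 = person with dementia (imbalanced).
X = rng.normal(size=(99, 64))
y = np.array([0] * 80 + [1] * 19)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class on the training split only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print("macro-F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```
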
17 pages, 1554 KiB  
Article
A Hybrid Protection Scheme for the Gait Analysis in Early Dementia Recognition
by Francesco Castro, Donato Impedovo and Giuseppe Pirlo
Sensors 2024, 24(1), 24; https://doi.org/10.3390/s24010024 - 19 Dec 2023
Cited by 1 | Viewed by 1602
Abstract
Human activity recognition (HAR) through gait analysis is a very promising research area for early detection of neurodegenerative diseases because gait abnormalities are typical symptoms of some neurodegenerative diseases, such as early dementia. While working with such biometric data, the performance parameters must be considered along with privacy and security issues. In other words, such biometric data should be processed under specific security and privacy requirements. This work proposes an innovative hybrid protection scheme combining a partially homomorphic encryption scheme and a cancelable biometric technique based on random projection to protect gait features, ensuring patient privacy according to ISO/IEC 24745. The proposed hybrid protection scheme has been implemented along a long short-term memory (LSTM) neural network to realize a secure early dementia diagnosis system. The proposed protection scheme is scalable and implementable with any type of neural network because it is independent of the network’s architecture. The conducted experiments demonstrate that the proposed protection scheme enables a high trade-off between safety and performance. The accuracy degradation is at most 1.20% compared with the early dementia recognition system without the protection scheme. Moreover, security and computational analyses of the proposed scheme have been conducted and reported. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: overall model of the early dementia recognition system with the proposed hybrid protection scheme; random projection approach; LSTM model architecture; setup of the data collection process; video capture and preprocessing; system accuracy with and without feature protection for walking left to right, right to left, and in both directions.
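
The protection scheme above combines partially homomorphic encryption with a cancelable biometric transform based on random projection. The numpy sketch below illustrates only the random-projection idea: gait feature vectors are projected with a user-specific random matrix, so a template can be revoked simply by re-drawing the matrix. The dimensions and the key-as-seed convention are illustrative assumptions, and the homomorphic-encryption layer is not shown.

```python
import numpy as np

def cancelable_template(features: np.ndarray, user_key: int, out_dim: int = 32) -> np.ndarray:
    """Project a gait feature vector with a key-derived Gaussian random matrix.
    Issuing a new key yields a new, unlinkable template (cancelability)."""
    rng = np.random.default_rng(user_key)            # the key seeds the projection
    proj = rng.normal(size=(out_dim, features.shape[0])) / np.sqrt(out_dim)
    return proj @ features

gait_features = np.random.rand(128)                  # placeholder extracted gait features
t1 = cancelable_template(gait_features, user_key=42)
t2 = cancelable_template(gait_features, user_key=43) # revoked and re-issued template
print(t1.shape, np.allclose(t1, t2))                 # (32,) False
```
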
21 pages, 7596 KiB  
Article
Visible Light Communications-Based Assistance System for the Blind and Visually Impaired: Design, Implementation, and Intensive Experimental Evaluation in a Real-Life Situation
by Alin-Mihai Căilean, Sebastian-Andrei Avătămăniței, Cătălin Beguni, Eduard Zadobrischi, Mihai Dimian and Valentin Popa
Sensors 2023, 23(23), 9406; https://doi.org/10.3390/s23239406 - 25 Nov 2023
Cited by 3 | Viewed by 1617
Abstract
Severe visual impairment and blindness significantly affect a person’s quality of life, leading sometimes to social anxiety. Nevertheless, instead of concentrating on a person’s inability, we could focus on their capacities and on their other senses, which in many cases are more developed. On the other hand, the technical evolution that we are witnessing is able to provide practical means that can reduce the effects that blindness and severe visual impairment have on a person’s life. In this context, this article proposes a novel wearable solution that has the potential to significantly improve blind person’s quality of life by providing personal assistance with the help of Visible Light Communications (VLC) technology. To prevent the wearable device from drawing attention and to not further emphasize the user’s deficiency, the prototype has been integrated into a smart backpack that has multiple functions, from localization to obstacle detection. To demonstrate the viability of the concept, the prototype has been evaluated in a complex scenario where it is used to receive the location of a certain object and to safely travel towards it. The experimental results have: i. confirmed the prototype’s ability to receive data at a Bit-Error Rate (BER) lower than 10−7; ii. established the prototype’s ability to provide support for a 3 m radius around a standard 65 × 65 cm luminaire; iii. demonstrated the concept’s compatibility with light dimming in the 1–99% interval while maintaining the low BER; and, most importantly, iv. proved that the use of the concept can enable a person to obtain information and guidance, enabling safer and faster way of traveling to a certain unknown location. As far as we know, this work is the first one to report the implementation and the experimental evaluation of such a concept. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: schematic representation of the VLC-based smart backpack for blind users' assistance; schematic of the visible light communications component; front, lateral, and rear views of the prototype; VLC testing scenario in which the indoor lighting provides point-of-interest information; signal processing at the VLC receiver, including VPPM modulation with a 30% duty cycle, light dimming, and a comparison with Manchester coding; examples of obstacle detection (open glass door, dispenser); paths taken by four blindfolded users travelling from point A to point B with and without backpack assistance.
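
Most of this abstract describes the prototype and its field trials, but one quantitative element, the reported Bit-Error Rate below 10^-7, is straightforward to compute on logged data. The snippet below is a generic BER calculation over transmitted and received bit arrays, included only to make that metric concrete; it is unrelated to the authors' firmware or modulation scheme.

```python
import numpy as np

def bit_error_rate(tx_bits: np.ndarray, rx_bits: np.ndarray) -> float:
    """Fraction of received bits that differ from the transmitted ones."""
    return float(np.mean(tx_bits != rx_bits))

rng = np.random.default_rng(0)
tx = rng.integers(0, 2, size=10_000_000)           # 10^7 transmitted bits
rx = tx.copy()
flip = rng.choice(tx.size, size=1, replace=False)  # inject a single bit error
rx[flip] ^= 1

print(f"BER = {bit_error_rate(tx, rx):.1e}")        # 1.0e-07
```
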
20 pages, 5594 KiB  
Article
Intelligent ADL Recognition via IoT-Based Multimodal Deep Learning Framework
by Madiha Javeed, Naif Al Mudawi, Abdulwahab Alazeb, Sultan Almakdi, Saud S. Alotaibi, Samia Allaoua Chelloug and Ahmad Jalal
Sensors 2023, 23(18), 7927; https://doi.org/10.3390/s23187927 - 16 Sep 2023
Viewed by 2317
Abstract
Smart home monitoring systems via internet of things (IoT) are required for taking care of elders at home. They provide the flexibility of monitoring elders remotely for their families and caregivers. Activities of daily living are an efficient way to effectively monitor elderly people at home and patients at caregiving facilities. The monitoring of such actions depends largely on IoT-based devices, either wireless or installed at different places. This paper proposes an effective and robust layered architecture using multisensory devices to recognize the activities of daily living from anywhere. Multimodality refers to the sensory devices of multiple types working together to achieve the objective of remote monitoring. Therefore, the proposed multimodal-based approach includes IoT devices, such as wearable inertial sensors and videos recorded during daily routines, fused together. The data from these multi-sensors have to be processed through a pre-processing layer through different stages, such as data filtration, segmentation, landmark detection, and 2D stick model. In next layer called the features processing, we have extracted, fused, and optimized different features from multimodal sensors. The final layer, called classification, has been utilized to recognize the activities of daily living via a deep learning technique known as convolutional neural network. It is observed from the proposed IoT-based multimodal layered system’s results that an acceptable mean accuracy rate of 84.14% has been achieved. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: architecture diagram of the multimodal IoT-based deep learning framework for ADL recognition; sample motion sensor signals after filtering; data segmentation over the inertial signal; real video frame and extracted human figure after background extraction (Berkeley-MHAD); human silhouette and 2D stick model; extracted LPCCs for the Jumping Jacks ADL; upward motion direction flow in the Jumping in Place ADL; feature optimization via genetic algorithm; proposed CNN model; sample frame sequences from the Berkeley-MHAD and Opportunity++ datasets; examples of problematic ADL activities with skeleton extraction problems.
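
The layered pipeline above segments inertial signals, derives a 2D stick model from video, and fuses features from both modalities before a CNN classifier. The sketch below only illustrates the windowing and feature-level fusion idea on synthetic arrays (simple statistics per window, concatenated across modalities); the paper's actual feature set (e.g., LPCCs, motion-direction flow) and its genetic-algorithm optimization are not reproduced.

```python
import numpy as np

def window_features(signal: np.ndarray, win: int = 100, step: int = 50) -> np.ndarray:
    """Slide a fixed window over a (samples, channels) signal and compute
    per-channel mean and standard deviation as simple window features."""
    feats = []
    for start in range(0, signal.shape[0] - win + 1, step):
        seg = signal[start:start + win]
        feats.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
    return np.asarray(feats)

inertial = np.random.randn(1000, 6)     # placeholder accelerometer + gyroscope channels
skeleton = np.random.randn(1000, 34)    # placeholder 2D stick-model keypoints (17 x/y pairs)

# Feature-level fusion: concatenate per-window features from both modalities.
fused = np.hstack([window_features(inertial), window_features(skeleton)])
print(fused.shape)   # (19, 80) -> one fused feature vector per window
```
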
30 pages, 10215 KiB  
Article
Intelligent Localization and Deep Human Activity Recognition through IoT Devices
by Abdulwahab Alazeb, Usman Azmat, Naif Al Mudawi, Abdullah Alshahrani, Saud S. Alotaibi, Nouf Abdullah Almujally and Ahmad Jalal
Sensors 2023, 23(17), 7363; https://doi.org/10.3390/s23177363 - 23 Aug 2023
Cited by 15 | Viewed by 2201
Abstract
Ubiquitous computing has been a green research area that has managed to attract and sustain the attention of researchers for some time now. As ubiquitous computing applications, human activity recognition and localization have also been popularly worked on. These applications are used in healthcare monitoring, behavior analysis, personal safety, and entertainment. A robust model has been proposed in this article that works over IoT data extracted from smartphone and smartwatch sensors to recognize the activities performed by the user and, in the meantime, classify the location at which the human performed that particular activity. The system starts by denoising the input signal using a second-order Butterworth filter and then uses a hamming window to divide the signal into small data chunks. Multiple stacked windows are generated using three windows per stack, which, in turn, prove helpful in producing more reliable features. The stacked data are then transferred to two parallel feature extraction blocks, i.e., human activity recognition and human localization. The respective features are extracted for both modules that reinforce the system’s accuracy. A recursive feature elimination is applied to the features of both categories independently to select the most informative ones among them. After the feature selection, a genetic algorithm is used to generate ten different generations of each feature vector for data augmentation purposes, which directly impacts the system’s performance. Finally, a deep neural decision forest is trained for classifying the activity and the subject’s location while working on both of these attributes in parallel. For the evaluation and testing of the proposed system, two openly accessible benchmark datasets, the ExtraSensory dataset and the Sussex-Huawei Locomotion dataset, were used. The system outperformed the available state-of-the-art systems by recognizing human activities with an accuracy of 88.25% and classifying the location with an accuracy of 90.63% over the ExtraSensory dataset, while, for the Sussex-Huawei Locomotion dataset, the respective results were 96.00% and 90.50% accurate. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: architecture of the proposed system for human activity recognition and localization; input signal pre-processing using a Butterworth filter; maximum Lyapunov exponent, MFCC, fractal dimension, and embedding dimension plots for activities from the ExtraSensory and Sussex-Huawei Locomotion datasets; step and step-length plots for indoor and outdoor settings; normalized heading angles and MFCCs for various locations; block diagrams for recursive feature elimination, the genetic algorithm as data augmenter, and the deep neural decision forest classifier; runtime and memory usage during DNDF training for activity recognition and localization on both datasets; effect of window size on the linear separability of the features.
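
Two of the steps described above, denoising with a second-order Butterworth filter and recursive feature elimination over the extracted features, map directly onto standard scipy and scikit-learn calls. The sketch below shows that mapping on placeholder data; the cutoff frequency, stacked-window handling, genetic-algorithm augmentation, and the deep neural decision forest are not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

fs = 50.0                                   # assumed sampling rate (Hz)
raw = np.random.randn(1000)                 # placeholder accelerometer axis

# Second-order low-pass Butterworth filter applied forward-backward (zero phase).
b, a = butter(N=2, Wn=10.0, btype="low", fs=fs)
denoised = filtfilt(b, a, raw)

# Recursive feature elimination keeps the most informative per-window features.
X = np.random.randn(200, 40)                # placeholder feature matrix
y = np.random.randint(0, 5, size=200)       # placeholder activity labels
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=15).fit(X, y)
print(denoised.shape, selector.support_.sum())   # (1000,) 15
```
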
16 pages, 3400 KiB  
Article
Vision-Based Recognition of Human Motion Intent during Staircase Approaching
by Md Rafi Islam, Md Rejwanul Haque, Masudul H. Imtiaz, Xiangrong Shen and Edward Sazonov
Sensors 2023, 23(11), 5355; https://doi.org/10.3390/s23115355 - 5 Jun 2023
Cited by 3 | Viewed by 2043
Abstract
Walking in real-world environments involves constant decision-making, e.g., when approaching a staircase, an individual decides whether to engage (climbing the stairs) or avoid. For the control of assistive robots (e.g., robotic lower-limb prostheses), recognizing such motion intent is an important but challenging task, primarily due to the lack of available information. This paper presents a novel vision-based method to recognize an individual’s motion intent when approaching a staircase before the potential transition of motion mode (walking to stair climbing) occurs. Leveraging the egocentric images from a head-mounted camera, the authors trained a YOLOv5 object detection model to detect staircases. Subsequently, an AdaBoost and gradient boost (GB) classifier was developed to recognize the individual’s intention of engaging or avoiding the upcoming stairway. This novel method has been demonstrated to provide reliable (97.69%) recognition at least 2 steps before the potential mode transition, which is expected to provide ample time for the controller mode transition in an assistive robot in real-world use. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: graphical abstract; overview of the two-part system (YOLOv5 staircase detection and intent prediction from bounding-box signals with AdaBoost/Gradient Boost); camera setup for egocentric video capture; raw and filtered normalized bounding-box area versus frame number; box plots of features (max-min peak differences and means of bounding-box area, width, and coordinates); illustration of intention classification time before stair climb; GB classifier output on a staircase video.
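
In the pipeline above, the widths, areas, and coordinates of the detected staircase bounding boxes are treated as time-series signals, summary features are extracted from them, and a boosting classifier predicts engage versus avoid. The snippet below sketches only that feature-extraction and classification step on synthetic bounding-box traces; it is not the authors' code, and the feature list is a guess at the kind of statistics mentioned (means and max-min peak differences).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def bb_signal_features(bb: np.ndarray) -> np.ndarray:
    """bb: (frames, 4) array of bounding-box [x, y, width, area] per frame.
    Returns per-signal mean and max-min range as a flat feature vector."""
    return np.concatenate([bb.mean(axis=0), bb.max(axis=0) - bb.min(axis=0)])

rng = np.random.default_rng(0)
# Placeholder dataset: 60 approach sequences, label 1 = climb, 0 = avoid.
X = np.stack([bb_signal_features(rng.normal(size=(90, 4))) for _ in range(60)])
y = rng.integers(0, 2, size=60)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```
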
36 pages, 8924 KiB  
Article
Automated Implementation of the Edinburgh Visual Gait Score (EVGS) Using OpenPose and Handheld Smartphone Video
by Shri Harini Ramesh, Edward D. Lemaire, Albert Tu, Kevin Cheung and Natalie Baddour
Sensors 2023, 23(10), 4839; https://doi.org/10.3390/s23104839 - 17 May 2023
Cited by 5 | Viewed by 3369
Abstract
Recent advancements in computing and artificial intelligence (AI) make it possible to quantitatively evaluate human movement using digital video, thereby opening the possibility of more accessible gait analysis. The Edinburgh Visual Gait Score (EVGS) is an effective tool for observational gait analysis, but human scoring of videos can take over 20 min and requires experienced observers. This research developed an algorithmic implementation of the EVGS from handheld smartphone video to enable automatic scoring. Participant walking was video recorded at 60 Hz using a smartphone, and body keypoints were identified using the OpenPose BODY25 pose estimation model. An algorithm was developed to identify foot events and strides, and EVGS parameters were determined at relevant gait events. Stride detection was accurate within two to five frames. The level of agreement between the algorithmic and human reviewer EVGS results was strong for 14 of 17 parameters, and the algorithmic EVGS results were highly correlated (r > 0.80, “r” represents the Pearson correlation coefficient) to the ground truth values for 8 of the 17 parameters. This approach could make gait analysis more accessible and cost-effective, particularly in areas without gait assessment expertise. These findings pave the way for future studies to explore the use of smartphone video and AI algorithms in remote gait analysis. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: OpenPose keypoint detection output; identification of heel strike and toe off for the right leg from midtoe and heel positions; mid-midstance detection using toe distance; distance between the toes during a walking trial at heel strike and mid-midstance; flowchart of gait event identification; geometric representations of the hip, knee, and ankle angle calculations; axes for EVGS computation; example of extra and missed mid-midstance detections; correlations between reviewers and between the algorithm and reviewers for coronal and sagittal plane parameters; example where OpenPose fails to recognize heel, ankle, and toe keypoints accurately; examples of no clearance and reduced clearance.
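
The gait-event step above identifies heel strikes and toe-offs from OpenPose keypoint trajectories and then compares the algorithmic EVGS scores against human reviewers using Pearson's r. A minimal sketch of those two ingredients (peak-based event detection on a vertical heel trajectory and the correlation check) is shown below; the actual EVGS parameter computation is not reproduced, and the synthetic trajectory and score arrays are purely illustrative.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import pearsonr

fps = 60                                   # video frame rate used in the study
t = np.arange(0, 5, 1 / fps)
# Placeholder vertical heel position: gait-like oscillation plus noise.
heel_y = np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.randn(t.size)

# Heel strikes approximated as local minima of heel height (peaks of -heel_y).
strikes, _ = find_peaks(-heel_y, distance=int(0.5 * fps))
print("estimated heel-strike frames:", strikes[:5])

# Agreement between algorithmic and reviewer scores for one EVGS parameter.
algo_scores = np.array([0, 1, 1, 2, 0, 1, 2, 2])
reviewer_scores = np.array([0, 1, 1, 2, 1, 1, 2, 2])
r, p = pearsonr(algo_scores, reviewer_scores)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```
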
22 pages, 11919 KiB  
Article
Flight Controller as a Low-Cost IMU Sensor for Human Motion Measurement
by Artur Iluk
Sensors 2023, 23(4), 2342; https://doi.org/10.3390/s23042342 - 20 Feb 2023
Cited by 1 | Viewed by 3084
Abstract
Human motion analysis requires information about the position and orientation of different parts of the human body over time. Widely used are optical methods such as the VICON system and sets of wired and wireless IMU sensors to estimate absolute orientation angles of extremities (Xsens). Both methods require expensive measurement devices and have disadvantages such as the limited rate of position and angle acquisition. In the paper, the adaptation of the drone flight controller was proposed as a low-cost and relatively high-performance device for the human body pose estimation and acceleration measurements. The test setup with the use of flight controllers was described and the efficiency of the flight controller sensor was compared with commercial sensors. The practical usability of sensors in human motion measurement was presented. The issues related to the dynamic response of IMU-based sensors during acceleration measurement were discussed. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: VICON optical motion-tracking sensors and body-mounted IMU sensors; commercial sensors (MTw Awinda, Noraxon Ultium Motion, RedShift Labs UM7, DFRobot SEN0386) with dimensions and masses; flight controller layout, connectivity ports, and SD card slot; cabled sensor with a 32 GB micro-SD card for local recording; layout of the measurement system and the remote-control trigger receiver; stages and example Python output of the recording synchronization procedure; reference Xsens MTi G-700 sensor and the flight controller sensors; impulse-excitation measurements of five flight controller sensors (FC1-FC5) before and after synchronization against the MTi reference; dynamic angle measurement comparison and angle deviation between the FC sensor and the reference; palm-mounted sensor set and synchronized pitch-angle measurements; sensor locations on the body; angles measured on the left foot and head during walking; vertical acceleration of each sensor and of the head, neck, and tailbone during barefoot and sports-shoe passages; head acceleration over ten subsequent steps; vertical response during a single impact of the bare right foot on a hard surface.
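
A central practical issue in the abstract above is aligning several independently recording flight-controller IMUs against a reference sensor using a shared impulse excitation. One common way to estimate the relative time offset between two such recordings is cross-correlation, sketched below on synthetic acceleration pulses; this is a generic alignment recipe, not the author's actual synchronization procedure.

```python
import numpy as np

fs = 500.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
pulse = np.exp(-((t - 0.6) ** 2) / 1e-4)    # synthetic impulse excitation

true_shift = 37                             # samples by which sensor B lags sensor A
sensor_a = pulse + 0.01 * np.random.randn(t.size)
sensor_b = np.roll(pulse, true_shift) + 0.01 * np.random.randn(t.size)

# Cross-correlate and take the lag at which the correlation peaks.
corr = np.correlate(sensor_b - sensor_b.mean(), sensor_a - sensor_a.mean(), mode="full")
lag = corr.argmax() - (t.size - 1)
print(f"estimated offset: {lag} samples ({lag / fs * 1000:.1f} ms)")  # ~37 samples
```
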
13 pages, 525 KiB  
Article
Incremental Learning of Human Activities in Smart Homes
by Sook-Ling Chua, Lee Kien Foo, Hans W. Guesgen and Stephen Marsland
Sensors 2022, 22(21), 8458; https://doi.org/10.3390/s22218458 - 3 Nov 2022
Cited by 4 | Viewed by 2139
Abstract
Sensor-based human activity recognition has been extensively studied. Systems learn from a set of training samples to classify actions into a pre-defined set of ground truth activities. However, human behaviours vary over time, and so a recognition system should ideally be able to continuously learn and adapt, while retaining the knowledge of previously learned activities, and without failing to highlight novel, and therefore potentially risky, behaviours. In this paper, we propose a method based on compression that can incrementally learn new behaviours, while retaining prior knowledge. Evaluation was conducted on three publicly available smart home datasets. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: summary of the proposed method; illustration of novelty detection by calculating the compression factor for four cases (differing activity, time, or location); implementation of the three models based on training and validation sets; average recognition accuracy of the three models trained on a smaller training set.
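
The incremental-learning method above is built on compression: a new sensor sequence that compresses well against an existing activity model is treated as known behaviour, while one that compresses poorly is flagged as novel. The toy sketch below illustrates a compression-factor test with zlib on symbolized event strings; the representation, threshold, and normalization are illustrative assumptions, not the paper's actual encoding.

```python
import zlib

def compression_factor(model_text: str, candidate: str) -> float:
    """Extra compressed size the candidate adds on top of the model, relative
    to compressing the candidate alone. Values near 0 mean "already known"."""
    base = len(zlib.compress(model_text.encode()))
    combined = len(zlib.compress((model_text + candidate).encode()))
    return (combined - base) / len(zlib.compress(candidate.encode()))

# Symbolized sensor-event sequences (each token = one triggered sensor).
known_activity = "kettle,cup,fridge,kettle,cup;" * 40        # learned "make tea" model
seen_again     = "kettle,cup,fridge,kettle,cup;"
novel_pattern  = "frontdoor,hall,garage,hall,frontdoor;"

print(compression_factor(known_activity, seen_again))    # small -> known behaviour
print(compression_factor(known_activity, novel_pattern)) # larger -> flag as novel
```
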
22 pages, 4385 KiB  
Article
Group Decision Making-Based Fusion for Human Activity Recognition in Body Sensor Networks
by Yiming Tian, Jie Zhang, Qi Chen, Shuping Hou and Li Xiao
Sensors 2022, 22(21), 8225; https://doi.org/10.3390/s22218225 - 27 Oct 2022
Cited by 2 | Viewed by 1488
Abstract
Ensemble learning systems (ELS) have been widely utilized for human activity recognition (HAR) with multiple homogeneous or heterogeneous sensors. However, traditional ensemble approaches for HAR cannot always work well due to insufficient accuracy and diversity of base classifiers, the absence of ensemble pruning, as well as the inefficiency of the fusion strategy. To overcome these problems, this paper proposes a novel selective ensemble approach with group decision-making (GDM) for decision-level fusion in HAR. As a result, the fusion process in the ELS is transformed into an abstract process that includes individual experts (base classifiers) making decisions with the GDM fusion strategy. Firstly, a set of diverse local base classifiers are constructed through the corresponding mechanism of the base classifier and the sensor. Secondly, the pruning methods and the number of selected base classifiers for the fusion phase are determined by considering the diversity among base classifiers and the accuracy of candidate classifiers. Two ensemble pruning methods are utilized: mixed diversity measure and complementarity measure. Thirdly, component decision information from the selected base classifiers is combined by using the GDM fusion strategy and the recognition results of the HAR approach can be obtained. Experimental results on two public activity recognition datasets (The OPPORTUNITY dataset; Daily and Sports Activity Dataset (DSAD)) suggest that the proposed GDM-based approach outperforms the well-known fusion techniques and other state-of-the-art approaches in the literature. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: flowchart of the proposed ELS-based HAR approach (base classifier training per body position, sub-ensemble selection via diversity measures, GDM-based fusion, final prediction); generalized flowchart of the GDM-based fusion phase; accuracy versus ensemble scale for the OPPORTUNITY and DSAD datasets; performance comparisons between base classifiers and the fused classifier for individual subjects in both datasets; confusion matrix comparisons of the GDM fusion, GA, WA, and MV combination strategies.
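
The approach above fuses the class-probability outputs of a pruned set of base classifiers through a group-decision-making strategy. The GDM consensus process itself is not reproduced here; the snippet below only sketches the general decision-level fusion setting it operates in, namely combining per-classifier probability matrices with expert weights and taking the arg-max class, with all numbers invented for illustration.

```python
import numpy as np

# Probability outputs of three selected base classifiers (experts) for
# 4 windows over 3 activity classes; each row sums to 1.
experts = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4], [0.5, 0.4, 0.1]],
    [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.2, 0.2, 0.6], [0.4, 0.5, 0.1]],
    [[0.8, 0.1, 0.1], [0.1, 0.7, 0.2], [0.1, 0.4, 0.5], [0.3, 0.6, 0.1]],
])

# Weights could reflect each expert's validation accuracy (illustrative values).
weights = np.array([0.5, 0.2, 0.3])

fused = np.tensordot(weights, experts, axes=1)   # (4, 3) weighted probabilities
print(fused.argmax(axis=1))                      # fused activity decision per window
```
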
22 pages, 3314 KiB  
Article
A Novel Segmentation Scheme with Multi-Probability Threshold for Human Activity Recognition Using Wearable Sensors
by Bangwen Zhou, Cheng Wang, Zhan Huan, Zhixin Li, Ying Chen, Ge Gao, Huahao Li, Chenhui Dong and Jiuzhen Liang
Sensors 2022, 22(19), 7446; https://doi.org/10.3390/s22197446 - 30 Sep 2022
Cited by 7 | Viewed by 1692
Abstract
In recent years, much research has been conducted on time series based human activity recognition (HAR) using wearable sensors. Most existing work for HAR is based on the manual labeling. However, the complete time serial signals not only contain different types of activities, but also include many transition and atypical ones. Thus, effectively filtering out these activities has become a significant problem. In this paper, a novel machine learning based segmentation scheme with a multi-probability threshold is proposed for HAR. Threshold segmentation (TS) and slope-area (SA) approaches are employed according to the characteristics of small fluctuation of static activity signals and typical peaks and troughs of periodic-like ones. In addition, a multi-label weighted probability (MLWP) model is proposed to estimate the probability of each activity. The HAR error can be significantly decreased, as the proposed model can solve the problem that the fixed window usually contains multiple kinds of activities, while the unknown activities can be accurately rejected to reduce their impacts. Compared with other existing schemes, computer simulation reveals that the proposed model maintains high performance using the UCI and PAMAP2 datasets. The average HAR accuracies are able to reach 97.71% and 95.93%, respectively. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figures: time-based and activity-based sliding-window segmentation; problem formalization of HAR; framework diagram of the proposed scheme; flow charts of the TS and SA algorithms; segmentation results under small and large thresholds; relative position of the intervals cut by the algorithm and the manual annotation intervals; triangle diagram of the lines connecting adjacent peaks and the related trough; diagram of the proposed MLWP algorithm; segmentation results, threshold-dependent accuracies, confusion matrices, and x-axis acceleration scatter comparisons on the UCI and PAMAP2 data sets; accuracy comparisons against other methods on both data sets.
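
The threshold-segmentation (TS) step described above exploits the small fluctuation of static-activity signals: windows whose variability stays below a threshold are marked as static segments, and the remaining spans are handled by the peak/trough-based SA stage. A bare-bones sketch of the thresholding idea on a synthetic accelerometer magnitude is given below; the threshold value, window length, and the SA and MLWP stages are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def static_mask(signal: np.ndarray, win: int = 50, threshold: float = 0.05) -> np.ndarray:
    """Mark each window as static (True) when its standard deviation stays
    below the threshold; the rest is left for peak/trough-based processing."""
    n_windows = signal.size // win
    trimmed = signal[: n_windows * win].reshape(n_windows, win)
    return trimmed.std(axis=1) < threshold

t = np.arange(0, 20, 0.02)
# Synthetic magnitude: still for 10 s, then walking-like oscillation for 10 s.
acc = np.concatenate([np.full(t.size // 2, 1.0),
                      1.0 + 0.5 * np.sin(2 * np.pi * 2 * t[: t.size // 2])])
acc += 0.01 * np.random.randn(acc.size)

mask = static_mask(acc)
print(f"{mask.sum()} of {mask.size} windows classified as static")
```
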
13 pages, 9972 KiB  
Article
Cadence Detection in Road Cycling Using Saddle Tube Motion and Machine Learning
by Bernhard Hollaus, Jasper C. Volmer and Thomas Fleischmann
Sensors 2022, 22(16), 6140; https://doi.org/10.3390/s22166140 - 17 Aug 2022
Cited by 4 | Viewed by 2958
Abstract
Most commercial cadence-measurement systems in road cycling are strictly limited in their function to the measurement of cadence. Other relevant signals, such as roll angle, inclination or a round kick evaluation, cannot be measured with them. This work proposes an alternative cadence-measurement system with less of the mentioned restrictions, without the need for distinct cadence-measurement apparatus attached to the pedal and shaft of the road bicycle. The proposed design applies an inertial measurement unit (IMU) to the seating pole of the bike. In an experiment, the motion data were gathered. A total of four different road cyclists participated in this study to collect different datasets for neural network training and evaluation. In total, over 10 h of road cycling data were recorded and used to train the neural network. The network’s aim was to detect each revolution of the crank within the data. The evaluation of the data has shown that using pure accelerometer data from all three axes led to the best result in combination with the proposed network architecture. A working proof of concept was achieved with an accuracy of approximately 95% on test data. As the proof of concept can also be seen as a new method for measuring cadence, the method was compared with the ground truth. Comparing the ground truth and the predicted cadence, it can be stated that for the relevant range of 50 rpm and above, the prediction over-predicts the cadence with approximately 0.9 rpm with a standard deviation of 2.05 rpm. The results indicate that the proposed design is fully functioning and can be seen as an alternative method to detect the cadence of a road cyclist. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
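As an informal illustration of the kind of post-processing implied by the abstract and by Figures 4-6 below, the sketch maps a thresholded per-sample stroke signal to an instantaneous cadence in rpm and computes Bland-Altman-style agreement statistics against a reference. The function names, sampling-rate handling, and threshold value are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def cadence_from_stroke_signal(pred, fs, threshold=0.5):
    # Map the network output to 0/1, take rising edges as detected crank
    # revolutions, and convert the intervals between them into rpm.
    binary = (np.asarray(pred, dtype=float) >= threshold).astype(int)
    onsets = np.flatnonzero(np.diff(binary) == 1)
    if onsets.size < 2:
        return np.array([]), np.array([])
    rev_times = onsets / float(fs)
    cadence_rpm = 60.0 / np.diff(rev_times)
    return rev_times[1:], cadence_rpm

def bland_altman(reference_rpm, predicted_rpm):
    # Bias and 95% limits of agreement between reference and predicted cadence.
    diff = np.asarray(predicted_rpm) - np.asarray(reference_rpm)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```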
Figure 1
The hardware block diagram with the SensorTile module (A), which contains the IMU, the SensorTile Cradle board in top (B) and bottom view (C) [28], and the Hall-effect sensor (D) [29]. The connecting cable between the Hall-effect sensor and the SensorTile module carries the supply voltage in red, the ground in black, the analog Hall-effect sensor output in violet, and a spare wire in green.
Figure 2
The experimental setup for the measurement of the motion data and the cadence signal. The IMU is fixed to the saddle tube with the given orientation. The Hall-effect sensor is fixed on the frame opposite the permanent magnet on the crank arm.
Figure 3
Structure of the neural network model, with layer types and sizes as indicated. N_batch denotes the variable dimension determined by the number of batches.
Figure 4
Pedal-stroke signal from the Hall-effect sensor after pre-processing versus the output signal predicted by the network. The binary threshold indicates the threshold used to decide whether the network output is mapped to 0 or 1.
Figure 5
Neural network evaluation: true cadence as measured with the Hall-effect sensor versus predicted cadence from the network. (a) Comparison for a large section of the test dataset. (b) Comparison for a zoomed-in section of the test dataset.
Figure 6
Bland–Altman diagrams for the comparison of the IMU-ML method with the Hall-effect sensor used as ground truth. (a) Comparison for the entire cadence range in the dataset. (b) Comparison for 50 rpm and above.
21 pages, 7753 KiB  
Article
Predicting Activity Duration in Smart Sensing Environments Using Synthetic Data and Partial Least Squares Regression: The Case of Dementia Patients
by Miguel Ortiz-Barrios, Eric Järpe, Matías García-Constantino, Ian Cleland, Chris Nugent, Sebastián Arias-Fonseca and Natalia Jaramillo-Rueda
Sensors 2022, 22(14), 5410; https://doi.org/10.3390/s22145410 - 20 Jul 2022
Cited by 2 | Viewed by 2348
Abstract
The accurate recognition of activities is fundamental for following up on the health progress of people with dementia (PwD), thereby supporting subsequent diagnosis and treatments. When monitoring the activities of daily living (ADLs), it is feasible to detect behaviour patterns, trace the disease evolution, and consequently provide effective and timely assistance. However, this task is affected by uncertainties derived from differences in smart home configurations and in the way each person undertakes the ADLs. One possible pathway is to train a supervised classification algorithm on large datasets; nonetheless, obtaining real-world data is costly and involves a challenging recruitment process. The resulting activity datasets are therefore small and may not capture each person's intrinsic properties. Simulation approaches have arisen as an efficient alternative, but synthetic data can differ significantly from real data. Hence, this paper proposes the application of Partial Least Squares Regression (PLSR) to approximate the real activity duration of various ADLs based on synthetic observations. First, the real activity duration of each ADL is contrasted with the one derived from an intelligent-environment simulator. Following this, different PLSR models were evaluated for estimating real activity duration from synthetic variables. A case study including eight ADLs was considered to validate the proposed approach. The results revealed that simulated and real observations differ significantly for some ADLs (p-value < 0.05); nevertheless, the synthetic variables can be further modified to predict the real activity duration with high accuracy (R²(pred) > 90%). Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
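The modelling step, fitting PLSR models that map synthetic descriptors to real activity durations, can be sketched with scikit-learn. The data below are randomly generated placeholders, and the number of latent components and the cross-validation setup are assumptions; the paper's own variable set and validation protocol should be consulted for the actual study design.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Placeholder data: each row is one simulated run of an ADL, each column a
# synthetic descriptor (e.g., simulated duration, sensor activation counts);
# y holds the matching real activity durations in seconds.
X_synth = rng.normal(size=(40, 5))
y_real = 120 + 10 * X_synth[:, 0] - 4 * X_synth[:, 1] + rng.normal(scale=2.0, size=40)

pls = PLSRegression(n_components=2)                  # latent components to retain
y_cv = cross_val_predict(pls, X_synth, y_real, cv=5)
print(f"Cross-validated R2: {r2_score(y_real, y_cv):.3f}")

pls.fit(X_synth, y_real)
y_hat = pls.predict(X_synth).ravel()                 # estimated real durations
```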
Figure 1
(a) The HINT layout. (b) The sensing capabilities of HINT.
Figure 2
(a) A participant undertaking some of the ADLs in the kitchen and bedroom. (b) Human behaviour monitoring at HINT.
Figure 3
Differences between synthetic and real activity duration. The ADLs in the first row, from the left, are: Stay in bed and Use restroom; second row: Make breakfast and Get out of home; third row: Get cold drink and Stay in the office; and fourth row: Get hot drink and Cook dinner.

Review

35 pages, 5160 KiB  
Review
A Comprehensive Survey on Emerging Assistive Technologies for Visually Impaired Persons: Lighting the Path with Visible Light Communications and Artificial Intelligence Innovations
by Alexandru Lavric, Cătălin Beguni, Eduard Zadobrischi, Alin-Mihai Căilean and Sebastian-Andrei Avătămăniței
Sensors 2024, 24(15), 4834; https://doi.org/10.3390/s24154834 - 25 Jul 2024
Cited by 2 | Viewed by 3566
Abstract
In a context in which severe visual impairment significantly affects human life, this article emphasizes the potential of Artificial Intelligence (AI) and Visible Light Communications (VLC) in developing future assistive technologies. Toward this goal, the article summarizes the features of some commercial assistance solutions and discusses the characteristics of VLC and AI, emphasizing their compatibility with the needs of blind individuals. Additionally, this work highlights the potential of AI in the efficient early detection of eye diseases. The article also reviews existing work on integrating VLC into assistive applications for blind persons, showing the progress achieved and emphasizing the high potential associated with the use of VLC. Finally, this work provides a roadmap toward the development of an integrated AI-based VLC assistance solution for visually impaired people, pointing out the high potential and some of the steps to follow. To the best of our knowledge, this is the first comprehensive work focusing on the integration of AI and VLC technologies in the domain of assistance for visually impaired persons. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figure 1
Illustration of the principles supporting visually impaired assistance systems: information that would normally be received by sight is instead obtained from different ambient-perception sensors, processed and analyzed, and the relevant content is then conveyed to the user through one of the other senses (mainly hearing and touch).
Figure 2
Illustration showing the evolution of visual impairment solutions.
Figure 3
Illustration summarizing the main features of assistive technologies and systems for visually impaired persons.
Figure 4
Schematic representation of a visible light communications architecture, emphasizing the transmitter and receiver components.
Figure 5
Visible light communications and positioning use-case scenario: using the indoor lighting infrastructure, visible light carries the data, providing high-data-rate wireless communications. In addition, based on the signals received from several lighting sources (L1, L2, L3, L4), measuring the time-of-flight values (t1, t2, t3, t4) and converting them to distance estimates (d1, d2, d3, d4) yields high-precision indoor localization, enabling indoor guidance services.
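The positioning principle described for Figure 5, converting time-of-flight measurements into distances and then into a position, can be sketched as a simple least-squares multilateration. This is a generic textbook formulation under idealized assumptions (synchronized clocks, 2D geometry, noiseless measurements), not the scheme of any specific system surveyed in the article; the function and variable names are illustrative.

```python
import numpy as np

C = 3.0e8  # propagation speed of light, m/s

def locate_from_tof(luminaires, tof):
    # luminaires: (N, 2) known 2D coordinates of the lighting sources L1..LN
    # tof: (N,) measured time-of-flight values t1..tN in seconds
    luminaires = np.asarray(luminaires, dtype=float)
    d = C * np.asarray(tof, dtype=float)             # distance estimates d1..dN
    # Subtract the last anchor's range equation from the others to obtain a
    # linear system in the unknown receiver position (x, y).
    A = 2.0 * (luminaires[:-1] - luminaires[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(luminaires[:-1] ** 2, axis=1)
         - np.sum(luminaires[-1] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Example: four ceiling luminaires at the corners of a 4 m x 4 m room.
L = np.array([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)])
true_pos = np.array([1.0, 2.5])
tof = np.linalg.norm(L - true_pos, axis=1) / C
print(locate_from_tof(L, tof))  # approximately [1.0, 2.5]
```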
Figure 6">
Figure 6
Workflow of eye disease detection using AI algorithms.
Figure 7
Visible light communication-based smart backpack for the assistance of blind and severely visually impaired persons: (a) schematic representation; (b) 3D design [139].
Figure 8
AI-driven VLC assistance solution development framework.
18 pages, 3463 KiB  
Review
Sonification for Personalised Gait Intervention
by Conor Wall, Peter McMeekin, Richard Walker, Victoria Hetherington, Lisa Graham and Alan Godfrey
Sensors 2024, 24(1), 65; https://doi.org/10.3390/s24010065 - 22 Dec 2023
Cited by 5 | Viewed by 1821
Abstract
Mobility challenges threaten physical independence and good quality of life. Often, mobility can be improved through gait rehabilitation and, specifically, the use of cueing through prescribed auditory, visual, and/or tactile cues. Each has been shown to help rectify abnormal gait patterns, improving mobility. Yet a limitation remains: long-term engagement with cueing modalities. A paradigm shift towards personalised cueing approaches, considering an individual's unique physiological condition, may provide a contemporary means of ensuring longitudinal and continuous engagement. Sonification could be a useful auditory cueing technique when integrated within personalised gait rehabilitation systems. Previously, sonification has demonstrated encouraging results, notably in reducing freezing of gait, mitigating spatial variability, and bolstering gait consistency in people with Parkinson's disease (PD). Specifically, sonification through the manipulation of acoustic features, paired with advanced audio processing techniques (e.g., time-stretching), enables auditory cueing interventions to be tailored and enhanced. Used in conjunction, these methods optimize gait characteristics and subsequently improve mobility, enhancing the effectiveness of the intervention. The aim of this narrative review is to further understand and unlock the potential of sonification as a pivotal tool in auditory cueing for gait rehabilitation, while highlighting that continued clinical research is needed to ensure comfort and desirability of use. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
Figure 1
This graph displays the waveforms of musical notes A1 and A5, sampled at a rate of 44.1 kHz. The waveform of the A1 note has a lower frequency of 110 Hz, while the green curve represents the A5 note with a higher frequency of 880 Hz, demonstrating that the more closely spaced the wave cycles, the higher the pitch.
Figure 2
The graph displays three sine waves, each generated at 440 Hz but with different amplitudes of 10 dB, 30 dB, and 50 dB, demonstrating that a taller waveform corresponds to a higher amplitude.
Figure 3
Waveforms of three audio signals (human voice, piano, and tuning fork), all centred around a 440 Hz fundamental frequency. The timbre differs between them owing to each source's unique harmonic structure, visible in the waveforms.
Figure 4
A diagram outlining the steps commonly taken in the phase vocoder algorithm for pitch-shifting and time-stretching purposes.
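As a concrete illustration of the pipeline in Figure 4, the sketch below time-stretches a test tone with librosa's phase-vocoder routine (STFT, phase vocoder, inverse STFT). The use of librosa, the test tone, and the parameter values are assumptions made for illustration; the review does not prescribe a particular implementation.

```python
import numpy as np
import librosa

sr = 22050
t = np.linspace(0.0, 2.0, int(2.0 * sr), endpoint=False)
y = 0.5 * np.sin(2.0 * np.pi * 440.0 * t)          # 2 s, 440 Hz test tone

# STFT -> phase vocoder -> inverse STFT: rate < 1 slows the signal down
# (longer duration) while keeping the pitch unchanged.
hop = 512
D = librosa.stft(y, n_fft=2048, hop_length=hop)
D_slow = librosa.phase_vocoder(D, rate=0.8, hop_length=hop)
y_slow = librosa.istft(D_slow, hop_length=hop)

# The same effect is available through the convenience wrapper:
# y_slow = librosa.effects.time_stretch(y, rate=0.8)
```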
Figure 5">
Figure 5
A diagram illustrating the DRC algorithm and how it increases or decreases the gain applied to the audio input based on a set threshold.
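Assuming DRC here refers to dynamic range compression, a minimal feed-forward compressor can make the threshold-based gain adjustment of Figure 5 concrete. The envelope follower, the parameter defaults, and the absence of make-up gain and knee smoothing are simplifying assumptions; real DRC implementations are considerably more refined.

```python
import numpy as np

def simple_compressor(x, sr, threshold_db=-20.0, ratio=4.0,
                      attack_ms=5.0, release_ms=50.0):
    x = np.asarray(x, dtype=float)
    # One-pole envelope follower on the absolute signal level.
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.empty_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = attack if s > level else release
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level

    level_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    # Above the threshold, reduce gain according to the ratio; below it, pass
    # the signal through unchanged.
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)
```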
Figure 6">
Figure 6
This diagram provides examples, referred to in the text, of how sonification and auditory manipulation techniques could be applied to gait rehabilitation, illustrating the extraction of various features from inertial measurement units (IMUs) on the feet. The IMU-based gait characteristics could undergo sonification to generate biofeedback acoustic variables through specialized auditory techniques, thereby changing the person's gait.
Figure 7
This box plot illustrates the distribution of mean/initial steps-per-minute (SPM) ratios for two conditions: verbal instruction and noise feedback with a fixed target. The blue lines indicate the SPM ratio (1) and the target SPM ratio (1.5).
Figure 8
Eta squared (η²G%) values for the top five outcome measures. The bar chart represents the η²G% for the New Freezing of Gait Questionnaire (NFOGQ), the Parkinson's Disease Questionnaire-39 mobility domain (PDQ39 mobility), the Unified Parkinson's Disease Rating Scale Part III (UPDRSIII), PDQ39 bodily discomfort, and PDQ39 total. Notably, the NFOGQ exhibits the highest η²G% value.
Figure 9
A bar chart illustrating the results of the experiment, demonstrating the significant effectiveness of sonification in increasing step length.