Editorial

Multi-Sensors for Human Activity Recognition

by Athina Tsanousa 1,*, Georgios Meditskos 2, Stefanos Vrochidis 1 and Ioannis Kompatsiaris 1

1 Centre for Research and Technology Hellas, Information Technologies Institute, 6th Km Charilaou-Thermi, 57001 Thessaloniki, Greece
2 School of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Sensors 2023, 23(10), 4617; https://doi.org/10.3390/s23104617
Submission received: 26 April 2023 / Accepted: 4 May 2023 / Published: 10 May 2023
(This article belongs to the Special Issue Multi-Sensor for Human Activity Recognition)

1. Introduction

Human activity recognition (HAR) has made significant progress in recent years, with growing applications across various domains, and the emergence of wearable and ambient sensors has opened new opportunities in the field. Multi-sensor systems, which integrate data from multiple sensors, can improve the accuracy and reliability of activity recognition. This Special Issue of Sensors, “Multi-Sensors for Human Activity Recognition”, brings together state-of-the-art work from the broader HAR field, with particular emphasis on multi-sensor environments. The following section summarizes the nine featured articles, which cover, among other topics, HAR in smart homes, speech and gesture recognition, and the security of IoT systems.

2. Overview of Contributions

Activity recognition has applications in the security domain for surveillance purposes, usually achieved through visual sensors; here, the activities of interest are violent actions. Although databases for general violence detection already exist, the authors of [1] contributed the Bus Violence dataset, a large collection of annotated video clips recorded by multiple cameras and depicting simulated acts of violence entirely on public transport. The paper also presents an application of deep learning (DL) methods for detecting these harmful activities.
Assisted living environments are probably the field with the most applications of HAR, and numerous works in the literature test a variety of methods and sensors for monitoring the activities of people living in these environments and for detecting harmful events. The authors of [2] tested the performance of algorithms for multi-resident activity recognition, approached as a multilabel classification (MLC) problem. Using two public datasets (ARAS and CASAS), they compared the random k-labelsets algorithm (RAkELd), an ensemble MLC method; binary relevance; and the classifier chain method, another widely used MLC ensemble that builds a chain of binary classifiers, one per label in the dataset.
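The two main strategies compared in [2] can be sketched with scikit-learn, whose MultiOutputClassifier implements binary relevance and whose ClassifierChain implements the chaining idea. The data below are synthetic stand-ins, not ARAS/CASAS, and this is not the authors' code:

```python
# Sketch: binary relevance vs. classifier chain on toy multilabel data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # 4 hypothetical sensor features

# Two residents' activity labels; label 2 is correlated with label 1,
# which is exactly the dependency a classifier chain can exploit.
Y = np.column_stack([
    (X[:, 0] > 0).astype(int),
    ((X[:, 1] > 0) & (X[:, 0] > 0)).astype(int),
])

# Binary relevance: one independent classifier per label.
br = MultiOutputClassifier(LogisticRegression()).fit(X, Y)

# Classifier chain: each classifier also receives the previous labels.
cc = ClassifierChain(LogisticRegression(), order=[0, 1]).fit(X, Y)

print("binary relevance accuracy:", (br.predict(X) == Y).mean())
print("classifier chain accuracy:", (cc.predict(X) == Y).mean())
```

Binary relevance ignores label correlations; the chain models them at the cost of fixing a label order.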
This review [3] presents the challenges of applying HAR in smart homes, surveys the algorithms and works in the field, and identifies remaining gaps. The authors divide HAR systems into two categories, video-based and sensor-based, noting that video-based systems raise privacy concerns, and they review both data-driven approaches (DDA) and knowledge-driven approaches (KDA). The paper also discusses in detail feature extraction, an important step in sensor data modeling, and the segmentation of temporal data.
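Temporal segmentation and feature extraction, discussed in [3], are commonly implemented as a sliding window over the raw stream followed by per-window statistics. A minimal sketch with illustrative window and step sizes (not drawn from any of the surveyed papers):

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Split a 1-D sensor stream into overlapping fixed-length windows."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

def extract_features(windows):
    """Classic time-domain features per window: mean, std, min, max."""
    return np.column_stack([windows.mean(1), windows.std(1),
                            windows.min(1), windows.max(1)])

accel = np.sin(np.linspace(0, 20, 500))      # stand-in for one accelerometer axis
W = sliding_windows(accel, win=100, step=50)  # 50% overlap, a common choice
F = extract_features(W)
print(W.shape, F.shape)                       # (9, 100) (9, 4)
```

The feature matrix F can then feed any of the classifiers discussed above.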
In sports, the interest of researchers lies in a subfield of HAR, pose estimation, whose applications range from everyday smartwatch and smartphone apps for monitoring exercise, burnt calories, etc., to HAR systems adapted to specific sports and movements. In [4], the authors employed computer vision in the martial arts domain to identify postures performed by karatekas, proposing a system intended to recognize the correct execution of an entire series of movements by a karateka.
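Pose-estimation pipelines such as the one in [4] typically reduce detected 2-D keypoints to joint angles before judging a posture. A small sketch of that reduction step, with hypothetical keypoints (this is not the authors' method):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c,
    e.g. shoulder-elbow-wrist from a pose estimator's 2-D output."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical keypoints: a fully extended arm gives ~180 degrees.
print(round(joint_angle((0, 0), (1, 0), (2, 0)), 1))  # 180.0
```

Comparing such angles against a reference sequence is one simple way to score the execution of a movement.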
An important aspect of multi-sensor systems, besides their performance in recognition tasks, is the secure exchange and storage of data. The authors in [5] explored the security challenges of an IoT localization system and proposed a blockchain-based distributed paradigm to secure localization services. The proposed system is strongly focused on the protection of the users’ privacy.
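The tamper evidence that a blockchain brings to stored localization records, as in [5], rests on hash-linking each record to its predecessor. A toy single-node sketch with hypothetical record fields (the actual system adds a distributed consensus layer this omits):

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """Link a record to its predecessor by hashing (prev_hash + payload)."""
    body = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return {"prev": prev_hash, "data": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def chain_is_valid(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    for i, block in enumerate(chain):
        body = json.dumps({"prev": block["prev"], "data": block["data"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, {"device": "tag-1", "x": 2.5, "y": 7.1})]
chain.append(make_block(chain[-1]["hash"], {"device": "tag-1", "x": 2.6, "y": 7.0}))
print(chain_is_valid(chain))   # True
chain[0]["data"]["x"] = 99.0   # tamper with a stored position
print(chain_is_valid(chain))   # False
```

Because each hash covers the previous hash, altering any stored position invalidates every later block.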
A sub-category of HAR is hand gesture recognition, which can be exploited to control home appliances and robots or to assist communication for people who are deaf or unable to speak. In [6], a dual-camera system was proposed for both static and dynamic gesture recognition, together with a hardware architecture that improves execution speed while maintaining high efficiency.
In the broader context of monitoring human activity, this Special Issue includes a paper [7] on the use of face masks in the COVID-19 era, specifically the exhalation of CO2 by people wearing four different types of masks. Using a multi-sensor system of four low-cost carbon dioxide (CO2) sensors, the authors measured CO2 concentrations in two indoor spaces and created spatial heatmaps to visualize them.
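Spatial heatmaps like those in [7] can be produced by interpolating a handful of point measurements onto a grid. The sketch below uses inverse-distance weighting with invented sensor positions and ppm values, not the paper's data or method:

```python
import numpy as np

# Hypothetical readings from four low-cost CO2 sensors in a 5 m x 5 m room:
# (x metres, y metres, CO2 ppm). Positions and values are illustrative only.
readings = [(1.0, 1.0, 650), (1.0, 4.0, 720), (4.0, 1.0, 800), (4.0, 4.0, 1100)]

def idw_heatmap(readings, nx=5, ny=5, size=5.0, power=2.0):
    """Inverse-distance-weighted interpolation onto an nx-by-ny grid."""
    xs = (np.arange(nx) + 0.5) * size / nx   # grid cell centres
    ys = (np.arange(ny) + 0.5) * size / ny
    grid = np.zeros((ny, nx))
    for j, gy in enumerate(ys):
        for i, gx in enumerate(xs):
            d2 = [max((gx - x) ** 2 + (gy - y) ** 2, 1e-9) for x, y, _ in readings]
            w = np.array([1.0 / d ** (power / 2) for d in d2])
            v = np.array([c for _, _, c in readings])
            grid[j, i] = (w * v).sum() / w.sum()  # weighted average of sensors
    return grid

heat = idw_heatmap(readings)
print(heat.shape)  # (5, 5)
```

Each grid cell is a weighted average of the sensors, so interpolated values always stay within the measured range.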
Besides data-driven approaches, knowledge-based approaches are also adopted in multi-sensor IoT environments to enable interoperability and to represent data and events. The authors of [8] presented a semantic web approach for detecting lifestyle and health-related events from wearable sensors, in a use case on improving the care of patients with multiple sclerosis (MS). The paper described a lightweight framework that supports the integration of a variety of lifestyle wearables and, to achieve interoperability at different levels, uses OWL 2 ontologies to generate interoperable knowledge graphs aligned with existing vocabularies and conceptual models.
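At its core, the knowledge-graph idea in [8] amounts to representing sensor events as subject-predicate-object triples that can be queried uniformly. A minimal pure-Python sketch with an invented namespace and vocabulary — the actual framework uses OWL 2 ontologies and aligned standard vocabularies, not these names:

```python
EX = "http://example.org/"  # hypothetical namespace, not the paper's ontology

# Sensor events as (subject, predicate, object) triples.
triples = {
    (EX + "event1", EX + "detectedBy", EX + "wristbandSensor"),
    (EX + "event1", EX + "hasActivity", EX + "Sleeping"),
    (EX + "event1", EX + "hasDurationHours", "6.5"),
    (EX + "event2", EX + "detectedBy", EX + "wristbandSensor"),
    (EX + "event2", EX + "hasActivity", EX + "Walking"),
}

def query(triples, s=None, p=None, o=None):
    """Match triples against an (s, p, o) pattern; None is a wildcard."""
    return [(ts, tp, tobj) for ts, tp, tobj in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or tobj == o)]

# All events attributed to the wristband sensor:
events = sorted(s for s, _, _ in query(triples, p=EX + "detectedBy",
                                       o=EX + "wristbandSensor"))
print(events)
```

The uniform triple shape is what lets data from heterogeneous wearables be merged and queried with one mechanism.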
An application of HAR to driving safety is presented in [9], where the authors propose a voice and hand gesture recognition system that monitors human-vehicle interaction and removes the need for the driver to manually control the in-vehicle infotainment system, reducing driver distraction and, consequently, possible fatal accidents. The authors applied sensor fusion techniques for multi-sensor monitoring and a binarized convolutional neural network to reduce the computational workload of the CNN in classifying speech and hand commands.
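Binarization reduces a network's multiply-accumulate operations to sign flips plus one scaling factor per filter. The sketch below follows the common XNOR-Net-style scheme, which is not necessarily the exact method of [9]:

```python
import numpy as np

def binarize(w):
    """Binarize weights to +/-1 via sign, keeping a scaling factor
    alpha = mean(|w|), as in XNOR-Net-style binarized networks."""
    alpha = np.abs(w).mean()
    wb = np.sign(np.where(w == 0, 1, w))  # map any exact zero to +1
    return wb, alpha

rng = np.random.default_rng(1)
w = rng.normal(size=(3, 3))  # a hypothetical 3x3 convolution filter
x = rng.normal(size=(3, 3))  # an input patch

wb, alpha = binarize(w)
full = (w * x).sum()             # full-precision response
approx = alpha * (wb * x).sum()  # binarized response: signs plus one multiply
print(full, approx)
```

Because wb holds only +/-1, the inner product needs no real multiplications, which is what shrinks the workload on embedded in-vehicle hardware.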

Author Contributions

Original draft preparation, A.T.; writing—review and editing, A.T. and G.M.; supervision, S.V. and I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from the European Union’s Horizon 2020 research and innovation programme ALAMEDA, under grant agreement No. GA101017558, and from the REA project (T1EDK-00686), co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH–CREATE–INNOVATE.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ciampi, L.; Foszner, P.; Messina, N.; Staniszewski, M.; Gennaro, C.; Falchi, F.; Serao, G.; Cogiel, M.; Golba, D.; Szczęsna, A.; et al. Bus Violence: An Open Benchmark for Video Violence Detection on Public Transport. Sensors 2022, 22, 8345.
  2. Lentzas, A.; Dalagdi, E.; Vrakas, D. Multilabel Classification Methods for Human Activity Recognition: A Comparison of Algorithms. Sensors 2022, 22, 2353.
  3. Bouchabou, D.; Nguyen, S.M.; Lohr, C.; LeDuc, B.; Kanellos, I. A Survey of Human Activity Recognition in Smart Homes Based on IoT Sensors Algorithms: Taxonomies, Challenges, and Opportunities with Deep Learning. Sensors 2021, 21, 6037.
  4. Echeverria, J.; Santos, O.C. Toward Modeling Psychomotor Performance in Karate Combats Using Computer Vision Pose Estimation. Sensors 2021, 21, 8378.
  5. Saia, R.; Podda, A.S.; Pompianu, L.; Reforgiato Recupero, D.; Fenu, G. A Blockchain-Based Distributed Paradigm to Secure Localization Services. Sensors 2021, 21, 6814.
  6. Tsai, T.-H.; Tsai, Y.-R. Architecture Design and VLSI Implementation of 3D Hand Gesture Recognition System. Sensors 2021, 21, 6724.
  7. Salman, N.; Khan, M.W.; Lim, M.; Khan, A.; Kemp, A.H.; Noakes, C.J. Use of Multiple Low Cost Carbon Dioxide Sensors to Measure Exhaled Breath Distribution with Face Mask Type and Wearing Behaviour. Sensors 2021, 21, 6204.
  8. Stavropoulos, T.G.; Meditskos, G.; Lazarou, I.; Mpaltadoros, L.; Papagiannopoulos, S.; Tsolaki, M.; Kompatsiaris, I. Detection of Health-Related Events and Behaviours from Wearable Sensor Lifestyle Data Using Symbolic Intelligence: A Proof-of-Concept Application in the Care of Multiple Sclerosis. Sensors 2021, 21, 6230.
  9. Oh, S.; Bae, C.; Cho, J.; Lee, S.; Jung, Y. Command Recognition Using Binarized Convolutional Neural Network with Voice and Radar Sensors for Human-Vehicle Interaction. Sensors 2021, 21, 3906.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

