Intelligent ADL Recognition via IoT-Based Multimodal Deep Learning Framework
Figure 2. Architecture diagram of the multimodal IoT-based deep learning framework for ADL recognition.
Figure 3. Sample motion sensor signals after the filters are applied.
Figure 4. Detailed view of the data segmentation applied to the inertial signal, shown in multiple colors; the red dotted box marks a single segment of data.
Figure 5. (a) Real video frame and (b) extracted human figure after background extraction for the bending activity in the Berkeley-MHAD dataset.
Figure 6. (a) Human silhouette; (b) 2D stick model, where each red dot represents a detected body point, green lines show the upper-body skeleton, and orange lines show the lower-body skeleton.
Figure 7. Extracted LPCC results for the Jumping Jacks ADL over the Berkeley-MHAD dataset.
Figure 8. Upward motion direction flow in the Jumping in Place ADL.
Figure 9. Detailed view of feature optimization via the genetic algorithm.
Figure 10. Proposed CNN model for multimodal IoT-based ADL recognition over Berkeley-MHAD.
Figure 11. Sample frame sequences from the Berkeley-MHAD [22] dataset.
Figure 12. Sample frame sequences from the Opportunity++ [21] dataset.
Figure 13. Examples of problematic ADL activities over Berkeley-MHAD, where red dotted circles point out the skeleton extraction problems.
Abstract
1. Introduction
- A novel algorithm is proposed for 2D stick model extraction, supporting more efficient ADL recognition in less computational time.
- An algorithm for human body landmark detection is proposed to recognize daily locomotion activities effectively.
- A genetic algorithm is optimized using a newly proposed fitness formula for video- and inertial-sensor-based ADL data.
- The proposed layers of the ADL recognition model deliver a robust IoT-based multimodal system with high recognition efficiency.
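The genetic-algorithm optimization named above can be sketched generically. This is a minimal illustration only, not the paper's exact procedure: the fitness function below (sum of per-feature weights minus a subset-size penalty) is a hypothetical stand-in for the proposed video/inertial fitness formula, and all parameter values are illustrative.

```python
import random

def fitness(mask, weights, penalty=0.2):
    # Hypothetical fitness: reward the weights of selected features,
    # penalize larger subsets (stand-in for the paper's fitness formula).
    return sum(w for m, w in zip(mask, weights) if m) - penalty * sum(mask)

def genetic_feature_selection(weights, pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    n = len(weights)
    # Random binary population: 1 = keep the feature, 0 = drop it.
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, weights), reverse=True)
        survivors = pop[: pop_size // 2]   # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)      # single-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1   # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, weights))

# Toy run over five hypothetical feature weights; higher-weight features
# tend to survive the selection pressure.
best = genetic_feature_selection([0.9, 0.1, 0.8, 0.05, 0.7])
```

Because the fittest half is carried over unchanged each generation, the best fitness in the population never decreases across generations.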
2. Literature Review
2.1. Simple Modal Systems
2.2. IoT-Based Multimodal Systems
3. Materials and Methods
3.1. Pre-Processing of Inertial Sensor Signals
3.2. Pre-Processing of Videos
Algorithm 1: Landmark detection and 2D stick model creation
3.3. Features Processing Layer
3.4. ADL Recognition Layer
4. Datasets, Experimental Setup, and Results
4.1. Datasets Description: Berkeley-MHAD and Opportunity++
4.2. Experimental Settings and Results
4.2.1. Experiment 1: Confusion Matrices over Opportunity++ and Berkeley-MHAD
4.2.2. Experiment 2: Confidence Levels over Skeleton Points
4.2.3. Experiment 3: Comparison with Other Important Classifiers
4.2.4. Experiment 4: Comparison with Other State-of-the-Art Techniques in the Literature
5. Discussion
6. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Ali, M.; Ali, A.A.; Taha, A.-E.; Dhaou, I.B.; Gia, T.N. Intelligent Autonomous Elderly Patient Home Monitoring System. In Proceedings of the ICC 2019—2019 IEEE International Conference on Communications (ICC), Shanghai, China, 21–23 May 2019; pp. 1–6.
- Madiha, J.; Ahmad, J.; Kim, K. Wearable Sensors based Exertion Recognition using Statistical Features and Random Forest for Physical Healthcare Monitoring. In Proceedings of the 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), Islamabad, Pakistan, 12–16 January 2021; pp. 512–517.
- Zhou, X.; Zhang, L. SA-FPN: An effective feature pyramid network for crowded human detection. Appl. Intell. 2022, 52, 12556–12568.
- Liu, Y.; Wang, K.; Liu, L.; Lan, H.; Lin, L. TCGL: Temporal Contrastive Graph for Self-Supervised Video Representation Learning. IEEE Trans. Image Process. 2022, 31, 1978–1993.
- Gaddam, A.; Mukhopadhyay, S.C.; Gupta, G.S. Trial & experimentation of a smart home monitoring system for elderly. In Proceedings of the 2011 IEEE International Instrumentation and Measurement Technology Conference, Hangzhou, China, 9–12 May 2011; pp. 1–6.
- Zouba, N.; Bremond, F.; Thonnat, M. An Activity Monitoring System for Real Elderly at Home: Validation Study. In Proceedings of the 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Boston, MA, USA, 29 August–1 September 2010; pp. 278–285.
- Chen, J.; Wang, Q.; Cheng, H.; Peng, W.; Xu, W. A Review of Vision-Based Traffic Semantic Understanding in ITSs. IEEE Trans. Intell. Transp. Syst. 2022, 23, 19954–19979.
- Suryadevara, N.K.; Mukhopadhyay, S.C.; Rayudu, R.K.; Huang, Y.M. Sensor data fusion to determine wellness of an elderly in intelligent home monitoring environment. In Proceedings of the 2012 IEEE International Instrumentation and Measurement Technology Conference Proceedings, Graz, Austria, 13–16 May 2012; pp. 947–952.
- Madiha, J.; Gochoo, M.; Jalal, A.; Kim, K. HF-SPHR: Hybrid Features for Sustainable Physical Healthcare Pattern Recognition Using Deep Belief Networks. Sustainability 2021, 13, 1699.
- Foroughi, H.; Aski, B.S.; Pourreza, H. Intelligent video surveillance for monitoring fall detection of elderly in home environments. In Proceedings of the 2008 11th International Conference on Computer and Information Technology, Khulna, Bangladesh, 24–27 December 2008; pp. 219–224.
- Bruno, B.; Mastrogiovanni, F.; Sgorbissa, A. A public domain dataset for ADL recognition using wrist-placed accelerometers. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; pp. 738–743.
- Nguyen, T.-H.-C.; Nebel, J.-C.; Florez-Revuelta, F. Recognition of Activities of Daily Living with Egocentric Vision: A Review. Sensors 2016, 16, 72.
- Gambi, E.; Temperini, G.; Galassi, R.; Senigagliesi, L.; De Santis, A. ADL Recognition Through Machine Learning Algorithms on IoT Air Quality Sensor Dataset. IEEE Sens. J. 2020, 20, 13562–13570.
- Nisar, M.A.; Shirahama, K.; Li, F.; Huang, X.; Grzegorzek, M. Rank Pooling Approach for Wearable Sensor-Based ADLs Recognition. Sensors 2020, 20, 3463.
- Wang, F.; Wang, H.; Zhou, X.; Fu, R. A Driving Fatigue Feature Detection Method Based on Multifractal Theory. IEEE Sens. J. 2022, 22, 19046–19059.
- Nasution, A.H.; Emmanuel, S. Intelligent Video Surveillance for Monitoring Elderly in Home Environments. In Proceedings of the 2007 IEEE 9th Workshop on Multimedia Signal Processing, Chania, Greece, 1–3 October 2007; pp. 203–206.
- Zhang, Z.; Cui, P.; Zhu, W. Deep Learning on Graphs: A Survey. IEEE Trans. Knowl. Data Eng. 2022, 34, 249–270.
- Wang, A.; Zhao, S.; Zheng, C.; Yang, J.; Chen, G.; Chang, C.-Y. Activities of Daily Living Recognition with Binary Environment Sensors Using Deep Learning: A Comparative Study. IEEE Sens. J. 2021, 21, 5423–5433.
- Ghayvat, H.; Pandya, S.; Patel, A. Deep Learning Model for Acoustics Signal Based Preventive Healthcare Monitoring and Activity of Daily Living. In Proceedings of the 2nd International Conference on Data, Engineering and Applications (IDEA), Bhopal, India, 28–29 February 2020; pp. 1–7.
- Zerkouk, M.; Chikhaoui, B. Spatio-Temporal Abnormal Behavior Prediction in Elderly Persons Using Deep Learning Models. Sensors 2020, 20, 2359.
- Ciliberto, M.; Rey, V.F.F.; Calatroni, A.; Lukowicz, P.; Roggen, D. Opportunity++: A Multimodal Dataset for Video- and Wearable, Object and Ambient Sensors-Based Human Activity Recognition. Front. Comput. Sci. 2021, 3, 2624.
- Ofli, F.; Chaudhry, R.; Kurillo, G.; Vidal, R.; Bajcsy, R. Berkeley MHAD: A comprehensive Multimodal Human Action Database. In Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision (WACV), Clearwater Beach, FL, USA, 15–17 January 2013; pp. 53–60.
- Pires, I.M.; Marques, G.; Garcia, N.M.; Pombo, N.; Flórez-Revuelta, F.; Spinsante, S.; Teixeira, M.C.; Zdravevski, E. Recognition of Activities of Daily Living and Environments Using Acoustic Sensors Embedded on Mobile Devices. Electronics 2019, 8, 1499.
- Hamim, M.; Paul, S.; Hoque, S.I.; Rahman, M.N.; Baqee, I.-A. IoT Based Remote Health Monitoring System for Patients and Elderly People. In Proceedings of the 2019 International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), Dhaka, Bangladesh, 10–12 January 2019; pp. 533–538.
- Sridharan, M.; Bigham, J.; Campbell, P.M.; Phillips, C.; Bodanese, E. Inferring Micro-Activities Using Wearable Sensing for ADL Recognition of Home-Care Patients. IEEE J. Biomed. Health Inform. 2020, 24, 747–759.
- Ferreira, J.M.; Pires, I.M.; Marques, G.; García, N.M.; Zdravevski, E.; Lameski, P.; Flórez-Revuelta, F.; Spinsante, S.; Xu, L. Activities of Daily Living and Environment Recognition Using Mobile Devices: A Comparative Study. Electronics 2020, 9, 180.
- Rahman, S.; Irfan, M.; Raza, M.; Moyeezullah Ghori, K.; Yaqoob, S.; Awais, M. Performance Analysis of Boosting Classifiers in Recognizing Activities of Daily Living. Int. J. Environ. Res. Public Health 2020, 17, 1082.
- Madhuranga, D.; Madhushan, R.; Siriwardane, C.; Gunasekera, K. Real-time multimodal ADL recognition using convolution neural network. Vis. Comput. 2021, 37, 1263–1276.
- Achirei, S.-D.; Heghea, M.-C.; Lupu, R.-G.; Manta, V.-I. Human Activity Recognition for Assisted Living Based on Scene Understanding. Appl. Sci. 2022, 12, 10743.
- Ghadi, Y.Y.; Batool, M.; Gochoo, M.; Alsuhibany, S.A.; Al Shloul, T.; Jalal, A.; Park, J. Improving the ambient intelligence living using deep learning classifier. Comput. Mater. Contin. 2022, 73, 1037–1053.
- Ihianle, I.K.; Nwajana, A.O.; Ebenuwa, S.H.; Otuka, R.I.; Owa, K.; Orisatoki, M.O. A Deep Learning Approach for Human Activities Recognition from Multimodal Sensing Devices. IEEE Access 2020, 8, 179028–179038.
- Ferrari, A.; Micucci, D.; Mobilio, M.; Napoletano, P. On the Personalization of Classification Models for Human Activity Recognition. IEEE Access 2020, 8, 32066–32079.
- Yu, H.; Pan, G.; Pan, M.; Li, C.; Jia, W.; Zhang, L.; Sun, M. A Hierarchical Deep Fusion Framework for Egocentric Activity Recognition using a Wearable Hybrid Sensor System. Sensors 2019, 19, 546.
- Madiha, J.; Mudawi, N.A.; Alabduallah, B.I.; Jalal, A.; Kim, W. A Multimodal IoT-Based Locomotion Classification System Using Features Engineering and Recursive Neural Network. Sensors 2023, 23, 4716.
- Žarić, N.; Radonjić, M.; Pavlićević, N.; Paunović Žarić, S. Design of a Kitchen-Monitoring and Decision-Making System to Support AAL Applications. Sensors 2021, 21, 4449.
- Thakur, N.; Han, C.Y. A Simplistic and Cost-Effective Design for Real-World Development of an Ambient Assisted Living System for Fall Detection and Indoor Localization: Proof-of-Concept. Information 2022, 13, 363.
- Al Shloul, T.; Javeed, M.; Gochoo, M.; Alsuhibany, S.A.; Ghadi, Y.Y.; Jalal, A.; Park, J. Student’s health exercise recognition tool for E-learning education. Intell. Autom. Soft Comput. 2023, 35, 149–161.
- Zhang, J.; Tang, Y.; Wang, H.; Xu, K. ASRO-DIO: Active Subspace Random Optimization Based Depth Inertial Odometry. IEEE Trans. Robot. 2023, 39, 1496–1508.
- Akhtar, I.; Ahmad, J.; Kim, K. Adaptive Pose Estimation for Gait Event Detection Using Context-Aware Model and Hierarchical Optimization. J. Electr. Eng. Technol. 2021, 16, 2721–2729.
- Akhter, I.; Hafeez, S. Human Body 3D Reconstruction and Gait Analysis via Features Mining Framework. In Proceedings of the 2022 19th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Islamabad, Pakistan, 16–20 August 2022; pp. 189–194.
- Madiha, J.; Ahmad, J. Body-worn Hybrid-Sensors based Motion Patterns Detection via Bag-of-features and Fuzzy Logic Optimization. In Proceedings of the 2021 International Conference on Innovative Computing (ICIC), Lahore, Pakistan, 9–10 November 2021; pp. 1–7.
- Shen, Y.; Ding, N.; Zheng, H.-T.; Li, Y.; Yang, M. Modeling Relation Paths for Knowledge Graph Completion. IEEE Trans. Knowl. Data Eng. 2021, 33, 3607–3617.
- Madiha, J.; Chelloug, S.A. Automated gestures recognition in Exergaming. In Proceedings of the 2022 International Conference on Electrical Engineering and Sustainable Technologies (ICEEST), Lahore, Pakistan, 14–15 December 2022.
- Ghadi, Y.Y.; Javeed, M.; Alarfaj, M.; Al Shloul, T.; Alsuhibany, S.A.; Jalal, A.; Kamal, S.; Kim, D.-S. MS-DLD: Multi-sensors based daily locomotion detection via kinematic-static energy and body-specific HMMs. IEEE Access 2022, 10, 23964–23979.
- Javeed, M.; Shorfuzzaman, M.; Alsufyani, N.; Chelloug, S.A.; Jalal, A.; Park, J. Physical human locomotion prediction using manifold regularization. PeerJ Comput. Sci. 2022, 8, e1105.
- Wei, H.; Jafari, R.; Kehtarnavaz, N. Fusion of Video and Inertial Sensing for Deep Learning–Based Human Action Recognition. Sensors 2019, 19, 3680.
- Zou, W.; Sun, Y.; Zhou, Y.; Lu, Q.; Nie, Y.; Sun, T.; Peng, L. Limited Sensing and Deep Data Mining: A New Exploration of Developing City-Wide Parking Guidance Systems. IEEE Intell. Transp. Syst. Mag. 2022, 14, 198–215.
- Gumaei, A.; Hassan, M.M.; Alelaiwi, A.; Alsalman, H. A Hybrid Deep Learning Model for Human Activity Recognition Using Multimodal Body Sensing Data. IEEE Access 2019, 7, 99152–99160.
- Taylor, W.; Shah, S.A.; Dashtipour, K.; Zahid, A.; Abbasi, Q.H.; Imran, M.A. An Intelligent Non-Invasive Real-Time Human Activity Recognition System for Next-Generation Healthcare. Sensors 2020, 20, 2653.
- Cheng, B.; Wang, M.; Zhao, S.; Zhai, Z.; Zhu, D.; Chen, J. Situation-Aware Dynamic Service Coordination in an IoT Environment. IEEE/ACM Trans. Netw. 2017, 25, 2082–2095.
- Zhong, T.; Wang, W.; Lu, S.; Dong, X.; Yang, B. RMCHN: A Residual Modular Cascaded Heterogeneous Network for Noise Suppression in DAS-VSP Records. IEEE Geosci. Remote Sens. Lett. 2023, 20, 7500205.
- Cao, K.; Ding, H.; Wang, B.; Lv, L.; Tian, J.; Wei, Q.; Gong, F. Enhancing Physical-Layer Security for IoT With Nonorthogonal Multiple Access Assisted Semi-Grant-Free Transmission. IEEE Internet Things J. 2022, 9, 24669–24681.
- Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Umar, A.M.; Linus, O.U.; Arshad, H.; Kazaure, A.A.; Gana, U.; Kiru, M.U. Comprehensive Review of Artificial Neural Network Applications to Pattern Recognition. IEEE Access 2019, 7, 158820–158846.
- Li, D.; Ge, S.S.; Lee, T.H. Fixed-Time-Synchronized Consensus Control of Multiagent Systems. IEEE Trans. Control Netw. Syst. 2021, 8, 89–98.
- Wang, F.; Li, Z.; He, F.; Wang, R.; Yu, W.; Nie, F. Feature Learning Viewpoint of Adaboost and a New Algorithm. IEEE Access 2019, 7, 149890–149899.
- Randhawa, K.; Loo, C.K.; Seera, M.; Lim, C.P.; Nandi, A.K. Credit Card Fraud Detection Using AdaBoost and Majority Voting. IEEE Access 2018, 6, 14277–14284.
- Zheng, Y.; Lv, X.; Qian, L.; Liu, X. An Optimal BP Neural Network Track Prediction Method Based on a GA–ACO Hybrid Algorithm. J. Mar. Sci. Eng. 2022, 10, 1399.
- Liao, Q.; Chai, H.; Han, H.; Zhang, X.; Wang, X.; Xia, W.; Ding, Y. An Integrated Multi-Task Model for Fake News Detection. IEEE Trans. Knowl. Data Eng. 2022, 34, 5154–5165.
- Akhter, I.; Javeed, M.; Jalal, A. Deep Skeleton Modeling and Hybrid Hand-crafted Cues over Physical Exercises. In Proceedings of the 2023 International Conference on Communication, Computing and Digital Systems (C-CODE), Islamabad, Pakistan, 17–18 May 2023; pp. 1–6.
- Azmat, U.; Jalal, A.; Javeed, M. Multi-sensors Fused IoT-based Home Surveillance via Bag of Visual and Motion Features. In Proceedings of the 2023 International Conference on Communication, Computing and Digital Systems (C-CODE), Islamabad, Pakistan, 17–18 May 2023; pp. 1–6.
- Lannan, N.; Zhou, L.; Fan, G. Human Motion Enhancement via Tobit Kalman Filter-Assisted Autoencoder. IEEE Access 2022, 10, 29233–29251.
- Tian, Y.; Li, H.; Cui, H.; Chen, J. Construction motion data library: An integrated motion dataset for on-site activity recognition. Sci. Data 2022, 9, 726.
- Lannan, N.; Zhou, L.; Fan, G. A Multiview Depth-based Motion Capture Benchmark Dataset for Human Motion Denoising and Enhancement Research. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–24 June 2022; pp. 426–435.
- Zhang, X.; Huang, D.; Li, H.; Zhang, Y.; Xia, Y.; Liu, J. Self-training maximum classifier discrepancy for EEG emotion recognition. CAAI Trans. Intell. Technol. 2023, early view.
- Li, L.; Wu, X.; Kong, M.; Liu, J.; Zhang, J. Quantitatively Interpreting Residents Happiness Prediction by Considering Factor–Factor Interactions. IEEE Trans. Comput. Soc. Syst. 2023, 10.
- Dai, X.; Xiao, Z.; Jiang, H.; Alazab, M.; Lui, J.C.S.; Dustdar, S.; Liu, J. Task Co-Offloading for D2D-Assisted Mobile Edge Computing in Industrial Internet of Things. IEEE Trans. Ind. Inform. 2023, 19, 480–490.
- Jiang, H.; Xiao, Z.; Li, Z.; Xu, J.; Zeng, F.; Wang, D. An Energy-Efficient Framework for Internet of Things Underlaying Heterogeneous Small Cell Networks. IEEE Trans. Mob. Comput. 2022, 21, 31–43.
- Lv, Z.; Qiao, L.; Li, J.; Song, H. Deep-learning-enabled security issues in the internet of things. IEEE Internet Things J. 2020, 8, 9531–9538.
- Jiang, H.; Wang, M.; Zhao, P.; Xiao, Z.; Dustdar, S. A Utility-Aware General Framework with Quantifiable Privacy Preservation for Destination Prediction in LBSs. IEEE/ACM Trans. Netw. 2021, 29, 2228–2241.
- Liu, H.; Yuan, H.; Liu, Q.; Hou, J.; Zeng, H.; Kwong, S. A Hybrid Compression Framework for Color Attributes of Static 3D Point Clouds. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1564–1577.
- Liu, H.; Yuan, H.; Hou, J.; Hamzaoui, R.; Gao, W. PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point Cloud Upsampling. IEEE Trans. Image Process. 2022, 31, 7389–7402.
- Mi, C.; Huang, S.; Zhang, Y.; Zhang, Z.; Postolache, O. Design and Implementation of 3-D Measurement Method for Container Handling Target. J. Mar. Sci. Eng. 2022, 10, 1961.
- Bao, N.; Zhang, T.; Huang, R.; Biswal, S.; Su, J.; Wang, Y.; Cha, Y. A Deep Transfer Learning Network for Structural Condition Identification with Limited Real-World Training Data. Struct. Control Health Monit. 2023, 2023, 8899806.
- Lv, Z.; Song, H. Mobile internet of things under data physical fusion technology. IEEE Internet Things J. 2019, 7, 4616–4624.
- Lu, S.; Liu, M.; Yin, L.; Yin, Z.; Liu, X.; Zheng, W.; Kong, X. The multi-modal fusion in visual question answering: A review of attention mechanisms. PeerJ Comput. Sci. 2023, 9, e1400.
- Cheng, B.; Zhu, D.; Zhao, S.; Chen, J. Situation-Aware IoT Service Coordination Using the Event-Driven SOA Paradigm. IEEE Trans. Netw. Serv. Manag. 2016, 13, 349–361.
IoT-Based ADL | OD1 | OD2 | CD1 | CD2 | OF | CF | ODW | CDW | ODW1 | CDW1 | ODW2 | CDW2 | ODW3 | CDW3 | CT | DC | TS |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
OD1 * | 8 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
OD2 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
CD1 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
CD2 | 0 | 0 | 1 | 8 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
OF | 1 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
CF | 0 | 0 | 0 | 0 | 0 | 8 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
ODW | 0 | 0 | 0 | 0 | 1 | 0 | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
CDW | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
ODW1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
CDW1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
ODW2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 |
CDW2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 1 |
ODW3 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 | 0 | 0 | 0 | 0 |
CDW3 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 8 | 0 | 0 | 0 |
CT | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 | 0 | 0 |
DC | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 8 | 0 |
TS | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 8 |
Mean accuracy = 84.12%
IoT-Based ADL | JIP | JJ | Ben | Pun | WaT | WaO | CH | TB | SiT | SD | SU | TP |
---|---|---|---|---|---|---|---|---|---|---|---|---|
JIP * | 9 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
JJ | 0 | 8 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Ben | 1 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Pun | 0 | 0 | 1 | 8 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
WaT | 0 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
WaO | 0 | 1 | 0 | 0 | 0 | 8 | 1 | 0 | 0 | 0 | 0 | 0 |
CH | 0 | 0 | 0 | 1 | 0 | 0 | 8 | 0 | 0 | 0 | 1 | 0 |
TB | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 | 0 | 0 | 0 | 0 |
SiT | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 9 | 0 | 0 | 0 |
SD | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 8 | 0 | 0 |
SU | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 | 1 |
TP | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 8 |
Mean accuracy = 84.17%
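The mean accuracies reported under both confusion matrices follow directly from summing the diagonal (correct predictions) and dividing by the total sample count. A minimal check, re-entering the Berkeley-MHAD matrix above (rows = true class, columns = predicted class, 10 test samples per class):

```python
# Confusion matrix for Berkeley-MHAD (Experiment 1), activity order:
# JIP, JJ, Ben, Pun, WaT, WaO, CH, TB, SiT, SD, SU, TP.
matrix = [
    [9, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],  # JIP
    [0, 8, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0],  # JJ
    [1, 0, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # Ben
    [0, 0, 1, 8, 0, 0, 0, 0, 1, 0, 0, 0],  # Pun
    [0, 0, 0, 0, 9, 0, 0, 0, 0, 0, 0, 1],  # WaT
    [0, 1, 0, 0, 0, 8, 1, 0, 0, 0, 0, 0],  # WaO
    [0, 0, 0, 1, 0, 0, 8, 0, 0, 0, 1, 0],  # CH
    [1, 0, 0, 0, 0, 1, 0, 8, 0, 0, 0, 0],  # TB
    [0, 0, 0, 0, 1, 0, 0, 0, 9, 0, 0, 0],  # SiT
    [0, 0, 1, 0, 1, 0, 0, 0, 0, 8, 0, 0],  # SD
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 1],  # SU
    [0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 8],  # TP
]

correct = sum(matrix[i][i] for i in range(len(matrix)))  # diagonal sum
total = sum(sum(row) for row in matrix)                  # all test samples
mean_accuracy = 100.0 * correct / total                  # 101/120 -> 84.17%
```

The same diagonal-over-total computation applied to the 17-class Opportunity++ matrix yields 143/170, i.e., the reported 84.12%.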
Human Skeleton Points | Confidence Level for Berkeley-MHAD | Confidence Level for Opportunity++ |
---|---|---|
Head | 0.83 | 0.85 |
Neck | 0.99 | 0.98 |
Right Elbow | 0.83 | 0.85 |
Left Elbow | 0.81 | 0.88 |
Right Wrist | 0.74 | 0.78 |
Left Wrist | 0.77 | 0.78 |
Torso | 0.87 | 0.88 |
Right knee | 0.79 | 0.84 |
Left knee | 0.65 | 0.75 |
Right ankle | 0.67 | 0.66 |
Left ankle | 0.71 | 0.77 |
Mean Confidence | 0.72 | 0.75 |
Locomotor Activities | ANN Precision | ANN Recall | ANN F1-Score | AdaBoost Precision | AdaBoost Recall | AdaBoost F1-Score | CNN Precision | CNN Recall | CNN F1-Score |
---|---|---|---|---|---|---|---|---|---|
JIP | 0.78 | 0.77 | 0.77 | 0.80 | 0.81 | 0.80 | 0.90 | 0.82 | 0.85 |
JJ | 0.74 | 0.71 | 0.72 | 0.73 | 0.78 | 0.75 | 0.80 | 0.89 | 0.84 |
Ben | 0.77 | 0.74 | 0.75 | 0.77 | 0.78 | 0.77 | 0.90 | 0.82 | 0.85 |
Pun | 0.70 | 0.72 | 0.70 | 0.73 | 0.71 | 0.71 | 0.80 | 0.89 | 0.84 |
WaT | 0.77 | 0.79 | 0.77 | 0.81 | 0.82 | 0.81 | 0.90 | 0.69 | 0.78 |
WaO | 0.81 | 0.80 | 0.80 | 0.88 | 0.87 | 0.87 | 0.80 | 0.80 | 0.80 |
CH | 0.74 | 0.80 | 0.76 | 0.79 | 0.75 | 0.76 | 0.80 | 0.89 | 0.84 |
TB | 0.77 | 0.77 | 0.77 | 0.71 | 0.75 | 0.72 | 0.80 | 0.89 | 0.84 |
SiT | 0.79 | 0.88 | 0.83 | 0.85 | 0.86 | 0.85 | 0.90 | 0.90 | 0.90 |
SD | 0.76 | 0.77 | 0.76 | 0.79 | 0.78 | 0.78 | 0.80 | 0.89 | 0.84 |
SU | 0.81 | 0.82 | 0.81 | 0.74 | 0.76 | 0.74 | 0.90 | 0.90 | 0.90 |
TP | 0.82 | 0.84 | 0.82 | 0.88 | 0.90 | 0.88 | 0.80 | 0.80 | 0.80 |
Mean | 0.77 | 0.78 | 0.77 | 0.79 | 0.80 | 0.78 | 0.84 | 0.85 | 0.84 |
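The "Mean" rows in the classifier-comparison tables are macro averages, i.e., unweighted means over the per-class scores, with F1 being the harmonic mean of precision and recall. A small sketch using the CNN columns of the Berkeley-MHAD table above:

```python
# Per-class CNN precision and recall from the Berkeley-MHAD comparison table,
# in activity order (JIP, JJ, Ben, Pun, WaT, WaO, CH, TB, SiT, SD, SU, TP):
cnn_precision = [0.90, 0.80, 0.90, 0.80, 0.90, 0.80, 0.80, 0.80, 0.90, 0.80, 0.90, 0.80]
cnn_recall    = [0.82, 0.89, 0.82, 0.89, 0.69, 0.80, 0.89, 0.89, 0.90, 0.89, 0.90, 0.80]

def macro_mean(values):
    # Macro average: every class weighted equally, as in the "Mean" row.
    return sum(values) / len(values)

def f1(precision, recall):
    # Per-class F1: harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

mean_p = macro_mean(cnn_precision)  # -> 0.84, matching the table
mean_r = macro_mean(cnn_recall)     # -> 0.85, matching the table
```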
Locomotor Activities | ANN Precision | ANN Recall | ANN F1-Score | AdaBoost Precision | AdaBoost Recall | AdaBoost F1-Score | CNN Precision | CNN Recall | CNN F1-Score |
---|---|---|---|---|---|---|---|---|---|
OD1 | 0.82 | 0.87 | 0.84 | 0.77 | 0.79 | 0.77 | 0.80 | 0.73 | 0.76 |
OD2 | 0.74 | 0.71 | 0.72 | 0.80 | 0.73 | 0.76 | 0.90 | 0.90 | 0.90 |
CD1 | 0.77 | 0.79 | 0.77 | 0.78 | 0.80 | 0.78 | 0.90 | 0.82 | 0.85 |
CD2 | 0.73 | 0.75 | 0.73 | 0.77 | 0.71 | 0.73 | 0.80 | 0.89 | 0.84 |
OF | 0.69 | 0.68 | 0.68 | 0.78 | 0.74 | 0.75 | 0.90 | 0.82 | 0.85 |
CF | 0.85 | 0.81 | 0.82 | 0.74 | 0.85 | 0.79 | 0.80 | 0.89 | 0.84 |
ODW | 0.64 | 0.68 | 0.65 | 0.61 | 0.63 | 0.61 | 0.80 | 0.89 | 0.84 |
CDW | 0.87 | 0.81 | 0.83 | 0.77 | 0.76 | 0.76 | 0.90 | 0.75 | 0.81 |
ODW1 | 0.77 | 0.71 | 0.73 | 0.78 | 0.79 | 0.78 | 0.80 | 0.89 | 0.84 |
CDW1 | 0.72 | 0.73 | 0.72 | 0.80 | 0.79 | 0.79 | 0.80 | 0.73 | 0.76 |
ODW2 | 0.77 | 0.79 | 0.77 | 0.84 | 0.82 | 0.82 | 0.90 | 1.00 | 0.94 |
CDW2 | 0.83 | 0.81 | 0.81 | 0.80 | 0.80 | 0.80 | 0.90 | 0.75 | 0.81 |
ODW3 | 0.74 | 0.79 | 0.76 | 0.87 | 0.81 | 0.83 | 0.80 | 0.80 | 0.80 |
CDW3 | 0.89 | 0.88 | 0.88 | 0.78 | 0.80 | 0.78 | 0.80 | 0.80 | 0.80 |
CT | 0.75 | 0.79 | 0.76 | 0.71 | 0.70 | 0.70 | 0.90 | 0.90 | 0.90 |
DC | 0.88 | 0.89 | 0.88 | 0.80 | 0.86 | 0.82 | 0.80 | 0.80 | 0.80 |
TS | 0.77 | 0.78 | 0.77 | 0.79 | 0.79 | 0.79 | 0.80 | 0.89 | 0.84 |
Mean | 0.77 | 0.78 | 0.77 | 0.77 | 0.77 | 0.76 | 0.84 | 0.83 | 0.83 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Javeed, M.; Mudawi, N.A.; Alazeb, A.; Almakdi, S.; Alotaibi, S.S.; Chelloug, S.A.; Jalal, A. Intelligent ADL Recognition via IoT-Based Multimodal Deep Learning Framework. Sensors 2023, 23, 7927. https://doi.org/10.3390/s23187927