Group Decision Making-Based Fusion for Human Activity Recognition in Body Sensor Networks
Figure 1. The flowchart of the proposed ELS-based HAR approach. (a) Diverse base classifiers are trained using the training data corresponding to each body position. (b) The optimal sub-ensemble of the ELS is selected by taking two diversity measures into account. (c) The GDM-based fusion method combines the base classifiers. (d) The final prediction is produced by the proposed ELS-based HAR model.
Figure 2. Generalized flowchart of the proposed GDM-based fusion phase.
Figure 3. The relationship between the accuracy and the ensemble scale of the ELS on the OPPORTUNITY dataset.
Figure 4. Performance comparison between the base classifiers (1–8) and the fused classifier (9) for subject 1 on the OPPORTUNITY dataset.
Figure 5. Performance comparison between the base classifiers (1–8) and the fused classifier (9) for subject 2 on the OPPORTUNITY dataset.
Figure 6. Performance comparison between the base classifiers (1–8) and the fused classifier (9) for subject 3 on the OPPORTUNITY dataset.
Figure 7. Performance comparison between the base classifiers (1–8) and the fused classifier (9) for subject 4 on the OPPORTUNITY dataset.
Figure 8. Comparison of the confusion matrices of different combination strategies for subject 1 on the OPPORTUNITY dataset: (a) GDM fusion; (b) GA; (c) WA; (d) MV.
Figure 9. The relationship between the accuracy and the ensemble scale of the ELS on the DSAD dataset.
Figure 10. Performance comparison between the base classifiers (1–5) and the fused classifier (6) for subject 1 on the DSAD dataset.
Figure 11. Performance comparison between the base classifiers (1–5) and the fused classifier (6) for subject 2 on the DSAD dataset.
Figure 12. Comparison of the confusion matrices of different combination strategies for subject 1 on the DSAD dataset: (a) GDM fusion; (b) GA; (c) WA; (d) MV.
Abstract
1. Introduction
2. Related Works
3. The Proposed ELS-Based HAR Approach
3.1. Phase One: Base Classifier Generation
3.2. Phase Two: ELS Pruning
3.2.1. Mixed Diversity Measure
3.2.2. Complementarity Measure
3.3. Phase Three: GDM-Based Decision-Level Fusion
3.3.1. Determination of the Initial Weights of Experts Based on Genetic Algorithm Optimization
- (1) Chromosome coding
- (2) Fitness function
- (3) Selection operation
- (4) Crossover operation
- (5) Mutation operation
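As a concrete illustration, here is a minimal sketch of how a GA with these five components might optimize the expert weights, assuming real-valued chromosomes (one gene per expert) and the validation accuracy of the weighted ensemble as the fitness function. The names (`ga_initial_weights`, `fitness`) and the specific operator choices (binary tournament selection, arithmetic crossover, Gaussian mutation) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(chrom, proba, y_true):
    """(2) Fitness: validation accuracy of the chromosome's weighted ensemble.
    proba has shape (K, N, C): K experts, N samples, C classes."""
    w = chrom / chrom.sum()                        # normalized expert weights
    fused = np.tensordot(w, proba, axes=1)         # (N, C) weighted-average scores
    return np.mean(fused.argmax(axis=1) == y_true)

def ga_initial_weights(proba, y_true, pop_size=30, gens=50, pc=0.8, pm=0.1):
    K = proba.shape[0]
    pop = rng.random((pop_size, K))                # (1) coding: one positive gene per expert
    for _ in range(gens):
        fit = np.array([fitness(c, proba, y_true) for c in pop])
        elite = pop[fit.argmax()].copy()           # keep the best chromosome
        i, j = rng.integers(0, pop_size, (2, pop_size))
        pop = pop[np.where(fit[i] >= fit[j], i, j)]     # (3) binary tournament selection
        for a in range(0, pop_size - 1, 2):             # (4) arithmetic crossover of pairs
            if rng.random() < pc:
                t = rng.random()
                pop[a], pop[a + 1] = (t * pop[a] + (1 - t) * pop[a + 1],
                                      (1 - t) * pop[a] + t * pop[a + 1])
        mask = rng.random(pop.shape) < pm               # (5) Gaussian mutation
        pop[mask] = np.clip(pop[mask] + rng.normal(0.0, 0.1, mask.sum()), 1e-6, None)
        pop[0] = elite                                  # elitism: best survives unchanged
    fit = np.array([fitness(c, proba, y_true) for c in pop])
    best = pop[fit.argmax()]
    return best / best.sum()
```

The returned vector sums to one and can serve directly as the initial expert weights in the fusion phase below.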
3.3.2. Weight Update Based on Deviation
3.3.3. The Fusion Process Based on GDM
- Step 1: The GA is used to assign the initial weight of each base classifier (expert) in the ELS.
- Step 2: Each expert in the ELS recognizes the activity samples, and the decision result of each expert, corresponding to Uk in Equation (7), is obtained.
- Step 3: The group decision result G is computed via Equation (7); it corresponds to the final recognition result of the ELS.
- Step 4: The expert weights are updated through Equations (10)–(12) according to each expert's result Uk and the group decision result G.
- Step 5: The final ensemble learning model is generated, and the testing samples are input to evaluate its effectiveness.
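A minimal sketch of Steps 2–4 follows, assuming Equation (7) is a weighted average of the experts' decision matrices and that Equations (10)–(12) reweight each expert inversely to its deviation from the group result. The deviation measure (Frobenius norm) and the update rule here are placeholders for the paper's exact formulas:

```python
import numpy as np

def gdm_fuse(U, w0, iters=5, eps=1e-12):
    """U: (K, N, C) decision matrices of the K experts (Step 2);
    w0: (K,) initial expert weights from the GA (Step 1)."""
    w = np.asarray(w0, dtype=float)
    w = w / w.sum()
    for _ in range(iters):
        G = np.tensordot(w, U, axes=1)            # group decision, stand-in for Eq. (7) (Step 3)
        dev = np.linalg.norm(U - G, axis=(1, 2))  # each expert's deviation from G
        w = 1.0 / (dev + eps)                     # stand-in for Eqs. (10)-(12): experts
        w = w / w.sum()                           # closer to the group gain weight (Step 4)
    G = np.tensordot(w, U, axes=1)
    return G.argmax(axis=1), w                    # final labels and converged weights
```

In this reading, the loop converges toward weights that favor experts whose decisions agree with the group consensus, which is the intuition behind GDM-based fusion.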
4. Experimental Protocol
4.1. Data Sets
4.2. Feature Extraction
4.3. Measures of Performance
5. Experimental Results
5.1. Results on UCI Opportunity Dataset
5.1.1. Effectiveness of the Mixed Diversity Measure and Complementarity Measure
5.1.2. Comparison with the Individual Classifiers
5.1.3. Comparison with Other Fusion Strategies and HAR Approaches
5.2. Results on UCI DSAD
5.2.1. Effectiveness of the Mixed Diversity Measure and Complementarity Measure
5.2.2. Comparison with the Individual Classifiers
5.2.3. Comparison with Other Fusion Strategies and HAR Approaches
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Uddin, M.Z.; Hassan, M.M.; Alsanad, A.; Savaglio, C. A body sensor data fusion and deep recurrent neural network-based behavior recognition approach for robust healthcare. Inf. Fusion 2020, 55, 105–115.
- Yunas, S.U.; Ozanyan, K.B. Gait Activity Classification Using Multi-Modality Sensor Fusion: A Deep Learning Approach. IEEE Sens. J. 2021, 21, 16870–16879.
- Wang, Y.; Cang, S.; Yu, H. A survey on wearable sensor modality centred human activity recognition in health care. Expert Syst. Appl. 2019, 137, 167–190.
- De, P.; Chatterjee, A.; Rakshit, A. Recognition of Human Behavior for Assisted Living Using Dictionary Learning Approach. IEEE Sens. J. 2017, 18, 2434–2441.
- Wang, J.; Wang, Z.; Gao, F.; Zhao, H.; Qiu, S.; Li, J. Swimming Stroke Phase Segmentation Based on Wearable Motion Capture Technique. IEEE Trans. Instrum. Meas. 2020, 69, 8526–8538.
- Muzammal, M.; Talat, R.; Sodhro, A.H.; Pirbhulal, S. A multi-sensor data fusion enabled ensemble approach for medical data from body sensor networks. Inf. Fusion 2020, 53, 155–164.
- Majumder, S.; Kehtarnavaz, N. Vision and inertial sensing fusion for human action recognition: A review. IEEE Sens. J. 2021, 21, 2454–2467.
- Reyes-Ortiz, J.L.; Oneto, L.; Samà, A.; Parra, X.; Anguita, D. Transition-aware human activity recognition using smartphones. Neurocomputing 2016, 171, 754–767.
- Choudhury, N.A.; Moulik, S.; Roy, D.S. Physique-Based Human Activity Recognition Using Ensemble Learning and Smartphone Sensors. IEEE Sens. J. 2021, 21, 16852–16860.
- Liaqat, S.; Dashtipour, K.; Shah, S.A.; Rizwan, A.; Alotaibi, A.A.; Althobaiti, T.; Arshad, K.; Assaleh, K.; Ramzan, N. Novel Ensemble Algorithm for Multiple Activity Recognition in Elderly People Exploiting Ubiquitous Sensing Devices. IEEE Sens. J. 2021, 21, 18214–18221.
- Das, A.; Sil, P.; Singh, P.K.; Bhateja, V.; Sarkar, R. MMHAR-EnsemNet: A Multi-Modal Human Activity Recognition Model. IEEE Sens. J. 2021, 21, 11569–11576.
- Alotaibi, B. Transportation Mode Detection by Embedded Sensors Based on Ensemble Learning. IEEE Access 2020, 8, 145552–145563.
- Tarafdar, P.; Bose, I. Recognition of human activities for wellness management using a smartphone and a smartwatch: A boosting approach. Decis. Support Syst. 2020, 140, 113426.
- Nweke, H.F.; Teh, Y.W.; Mujtaba, G.; Al-Garadi, M.A. Data fusion and multiple classifier systems for human activity detection and health monitoring: Review and open research directions. Inf. Fusion 2019, 46, 147–170.
- Webber, M.; Rojas, R.F. Human Activity Recognition With Accelerometer and Gyroscope: A Data Fusion Approach. IEEE Sens. J. 2021, 21, 16979–16989.
- Richoz, S.; Wang, L.; Birch, P.; Roggen, D. Transportation mode recognition fusing wearable motion, sound and vision sensors. IEEE Sens. J. 2020, 20, 9314–9328.
- Zhang, W.; Zhao, X.; Li, Z. A Comprehensive Study of Smartphone-Based Indoor Activity Recognition via Xgboost. IEEE Access 2019, 7, 80027–80042.
- Chen, C.; Jafari, R.; Kehtarnavaz, N. Improving Human Action Recognition Using Fusion of Depth Camera and Inertial Sensors. IEEE Trans. Hum. Mach. Syst. 2014, 45, 51–61.
- Dong, Y.; Li, X.; Dezert, J.; Khyam, M.O.; Rahim, N.A.; Ge, S.S. Dezert-Smarandache Theory-Based Fusion for Human Activity Recognition in Body Sensor Networks. IEEE Trans. Ind. Inform. 2020, 16, 7138–7149.
- Chen, Z.; Jiang, C.; Xie, L. A Novel Ensemble ELM for Human Activity Recognition Using Smartphone Sensors. IEEE Trans. Ind. Inform. 2018, 15, 2691–2699.
- Cao, J.J.; Li, W.F.; Ma, C.C.; Tao, Z.W. Optimizing multi-sensor deployment via ensemble pruning for wearable activity recognition. Inf. Fusion 2018, 41, 68–79.
- Cui, W.; Li, B.; Zhang, L.; Chen, Z. Device-free single-user activity recognition using diversified deep ensemble learning. Appl. Soft Comput. 2021, 102, 107066.
- Chen, M.; Li, Y.; Luo, X.; Wang, W.; Wang, L.; Zhao, W. A Novel Human Activity Recognition Scheme for Smart Health Using Multilayer Extreme Learning Machine. IEEE Internet Things J. 2018, 6, 1410–1418.
- Lu, K.-D.; Wu, Z.-G. Genetic Algorithm-Based Cumulative Sum Method for Jamming Attack Detection of Cyber-Physical Power Systems. IEEE Trans. Instrum. Meas. 2022, 71, 1–10.
- Available online: http://archive.ics.uci.edu/ml/datasets/OPPORTUNITY+Activity+Recognition (accessed on 4 January 2021).
- Available online: http://archive.ics.uci.edu/ml/datasets/Daily+and+Sports+Activities (accessed on 22 March 2022).
- Saeedi, S.; El-Sheimy, N. Activity recognition using fusion of low-cost sensors on a smartphone for mobile navigation application. Micromachines 2015, 6, 1100–1134.
- Chavarriaga, R.; Sagha, H.; Calatroni, A.; Digumarti, S.T.; Tröster, G.; Millán, J.D.R.; Roggen, D. The Opportunity challenge: A benchmark database for on-body sensor-based activity recognition. Pattern Recognit. Lett. 2013, 34, 2033–2042.
- Ye, J.; Qi, G.-J.; Zhuang, N.; Hu, H.; Hua, K.A. Learning Compact Features for Human Activity Recognition Via Probabilistic First-Take-All. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 42, 126–139.
- Hammad, I.; El-Sankary, K. Practical Considerations for Accuracy Evaluation in Sensor-Based Machine Learning and Deep Learning. Sensors 2019, 19, 3491.
- Veeriah, V.; Zhuang, N.; Qi, G.J. Differential recurrent neural networks for action recognition. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 4041–4049.
- Altun, K.; Barshan, B.; Tunçel, O. Comparative study on classifying human activities with miniature inertial and magnetic sensors. Pattern Recognit. 2010, 43, 3605–3620.
Subject | Selected Sensors |
---|---|
Subject 1 | 1. RKN/Acc, 2. BACK/Acc, 3. BACK/Magn, 4. RUA/Acc, 5. BACK/Gyro, 6. RLA/Acc, 7. LUA/Gyro, 8. LLA/Acc |
Subject 2 | 1. RKN/Acc, 2. HIP/Acc, 3. RLA/Magn, 4. LUA/Gyro, 5. LLA/Acc, 6. BACK/Gyro, 7. RWR/Acc, 8. RH/Acc |
Subject 3 | 1. RKN/Acc, 2. HIP/Acc, 3. LUA/Acc, 4. LUA/Gyro, 5. RUA/Acc, 6. BACK/Magn, 7. RUA/Gyro, 8. LH/Acc |
Subject 4 | 1. BACK/Acc, 2. RKN/Acc, 3. LUA/Acc, 4. RUA/Acc, 5. LLA/Acc, 6. LUA/Magn, 7. BACK/Acc, 8. HIP/Acc |
Subject | Selected Sensors |
---|---|
Subject 1 | 1. RLA/Acc, 2. RUA/Gyro, 3. LUA/Gyro, 4. RUA/Acc, 5. BACK/Acc, 6. LLA/Magn, 7. LH/Acc, 8. RWR/Acc |
Subject 2 | 1. RUA/Acc, 2. RH/Acc, 3. BACK/Gyro, 4. LUA/Acc, 5. HIP/Acc, 6. BACK/Acc, 7. RUA/Acc, 8. RUA/Magn |
Subject 3 | 1. RKN/Acc, 2. LUA/Acc, 3. BACK/Acc, 4. BACK/Gyro, 5. RLA/Acc, 6. LUA/Gyro, 7. LLA/Acc, 8. HIP/Acc |
Subject 4 | 1. LUA/Acc, 2. LLA/Gyro, 3. BACK/Acc, 4. LLA/Acc, 5. LUA/Gyro, 6. BACK/Gyro, 7. RH/Acc, 8. LUA/Magn |
Methods | Accuracy (Subject 1) | Accuracy (Subject 2) | Accuracy (Subject 3) | Accuracy (Subject 4) |
---|---|---|---|---|
MV | 82.50% | 84.79% | 79.54% | 80.84% |
WA | 85.80% | 86.87% | 82.43% | 86.82% |
GA | 92.13% | 88.32% | 84.58% | 88.52% |
Proposed method | 94.79% | 90.99% | 87.89% | 92.32% |
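For reference, here are minimal sketches of the MV and WA baselines compared in the table above, in their standard formulations (hard-label voting for MV, fixed-weight probability averaging for WA); these are not reconstructions of this paper's exact implementations:

```python
import numpy as np

def majority_vote(labels, num_classes):
    """MV: labels has shape (K, N), one hard class label per expert and sample."""
    K, N = labels.shape
    votes = np.zeros((N, num_classes), dtype=int)
    for k in range(K):
        votes[np.arange(N), labels[k]] += 1       # each expert casts one vote per sample
    return votes.argmax(axis=1)

def weighted_average(proba, w):
    """WA: proba (K, N, C) class probabilities, w (K,) fixed expert weights."""
    w = np.asarray(w, dtype=float)
    fused = np.tensordot(w / w.sum(), proba, axes=1)   # (N, C) weighted mean
    return fused.argmax(axis=1)
```

Unlike the GDM fusion above, neither baseline adapts the expert weights to the agreement between each expert and the group result, which is consistent with the accuracy gap reported in the table.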
Methods | Accuracy (Subject 1) | Accuracy (Subject 2) | Accuracy (Subject 3) | Accuracy (Subject 4) |
---|---|---|---|---|
DSmT-based method [19] | 97.14% | 88.69% | 84.39% | 92.62% |
Ensemble-extreme learning machine [20] | 91.4% | 88.43% | 87.14% | 88.3% |
Extreme learning machine [21] | 70.56% | 71.26% | 65.87% | 71.54% |
Naïve Bayes [27] | 87.42% | 84.01% | 82.1% | 85.17% |
Nearest centroid classifier [28] | 83.05% | 87.18% | 76.47% | 81.85% |
Proposed method (GDM-based fusion) | 94.79% | 90.99% | 87.89% | 93.32% |
Subject | Selected Sensors |
---|---|
Subject 1 | 1. T/yacc, 2. LA/zgyro, 3. T/xacc, 4. T/zacc, 5. RA/zmag, 6. LL/ygyro, 7. RL/xmag, 8. RA/yacc |
Subject 2 | 1. T/xacc, 2. RL/xmag, 3. RA/yacc, 4. T/zgyro, 5. LL/zacc, 6. LA/yacc, 7. T/xmag, 8. RL/zacc |
Subject 3 | 1. RA/yacc, 2. T/yacc, 3. LL/xmag, 4. LA/zacc, 5. T/xmag, 6. T/xacc, 7. RA/zgyro, 8. LL/xgyro |
Subject 4 | 1. T/zacc, 2. T/xmag, 3. RL/zgyro, 4. T/xgyro, 5. LA/ymag, 6. RA/zmag, 7. RA/xacc, 8. T/xmag |
Methods | Accuracy (Subject 1) | Accuracy (Subject 2) | Accuracy (Subject 3) | Accuracy (Subject 4) |
---|---|---|---|---|
MV | 88.64% | 85.42% | 86.85% | 89.72% |
WA | 90.35% | 86.04% | 92.28% | 91.64% |
GA | 91.47% | 87.87% | 91.88% | 93.47% |
Proposed method | 95.84% | 91.82% | 94.03% | 97.76% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).