A Fusion-Assisted Multi-Stream Deep Learning and ESO-Controlled Newton–Raphson-Based Feature Selection Approach for Human Gait Recognition
Figure 1. Proposed framework of HGR using two-stream fusion-assisted deep learning and optimal feature selection.
Figure 2. Proposed fusion of filters for contrast enhancement of the moving subject.
Figure 3. Original bottleneck architecture of MobilenetV2.
Figure 4. Original architecture of the ShuffleNet deep model.
Figure 5. Visual illustration of deep feature extraction using deep transfer learning.
Figure 6. Visual analysis of intermediate steps of the proposed framework.
Figure 7. Comparison of the proposed ESOcNR feature selection algorithm with the original ESO-based selection.
Abstract
1. Introduction
1.1. Existing Techniques
1.2. Major Challenges
1.3. Major Contributions
- A contrast enhancement technique based on the fusion of local and global filter information is proposed; a high-boost operation is then applied to highlight the human region in a video frame.
- Data augmentation is performed, and two fine-tuned deep learning models (MobilenetV2 and ShuffleNet) are trained using deep transfer learning; features are extracted from the flattened global average pooling layers instead of the fully connected layer.
- Features of both streams are fused in a serial fashion, which minimizes information loss, and the best features are then selected using a new approach called ESO-controlled Newton–Raphson (ESOcNR).
- Detailed ablation-study results are computed and discussed, showing the accuracy improvement of this work.
1.4. Manuscript Organization
2. Proposed Methodology
2.1. Novelty 1: Hybrid Fusion Enhancement Technique
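The contributions above summarize this step: local and global filter outputs are fused, and a high-boost operation then highlights the moving subject. A minimal NumPy sketch of the high-boost stage follows, assuming a 3×3 box blur as the smoothing filter and a boost factor `k` (both are illustrative assumptions; the paper's exact filter fusion is not reproduced here):

```python
import numpy as np

def high_boost(frame, k=1.5, size=3):
    """High-boost filtering: add k times the high-frequency residual
    (original minus local mean) back onto the frame to sharpen detail."""
    f = frame.astype(np.float64)
    pad = size // 2
    padded = np.pad(f, pad, mode="edge")
    # Local mean via a simple box filter built from shifted sums.
    blur = np.zeros_like(f)
    for dy in range(size):
        for dx in range(size):
            blur += padded[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    blur /= size * size
    boosted = f + k * (f - blur)  # high-boost = original + k * high-pass
    return np.clip(boosted, 0, 255).astype(np.uint8)
```

On a perfectly flat region the residual is zero, so only edges and texture (e.g., the silhouette boundary of the subject) are amplified.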
2.2. Pre-Trained Deep Models
2.3. Deep Transfer Learning-Based Feature Extraction
2.4. Novelty 2: Minimal Serial Features Fusion
Algorithm 1: Proposed Feature Fusion
Input: Feature vectors FV1 and FV2
Step 1: Serially fuse FV1 and FV2 into FV3 using the equation below
Step 2: for i = 1 to sizeof(FV3)
Step 3: Initialize the static parameters: Er = 0, Iterations = 100
Step 4: Minimization function: if (F > Er), repeat the above steps; stop when Er is near or equal to zero
Output: Final fused feature vector
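Algorithm 1 can be sketched as follows. Serial fusion concatenates the per-sample feature rows, so the N×1280 MobilenetV2 vector and the N×544 ShuffleNet vector yield an N×1824 matrix before minimization. The paper does not specify the minimization function F, so this sketch assumes a per-feature variance fitness and drops features whose fitness falls to the Er threshold:

```python
import numpy as np

def serial_fuse(fv1, fv2, er=0.0, iterations=100):
    """Sketch of Algorithm 1: serially fuse two feature matrices, then
    iteratively discard features whose fitness F <= er until the stopping
    criterion holds or the iteration budget runs out.
    The variance-based fitness F is an assumption made for illustration."""
    fv3 = np.concatenate([fv1, fv2], axis=1)  # Step 1: serial fusion
    for _ in range(iterations):               # Steps 2-4
        f = fv3.var(axis=0)                   # per-feature fitness F
        if f.min() > er:                      # stop when F no longer reaches Er
            break
        fv3 = fv3[:, f > er]                  # drop features with F <= Er
    return fv3
```

For example, fusing a 4×2 matrix with a 4×2 matrix containing one constant (zero-variance) column returns a 4×3 fused vector.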
2.5. Novelty 3: Proposed ESOcNR Feature Selection
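ESOcNR couples the equilibrium-style optimizer (ESO) with a Newton–Raphson refinement step. The Newton–Raphson component itself is the classical update x ← x − f(x)/f′(x); the generic sketch below shows only that local refinement step (how the paper couples it to ESO candidate solutions is not reproduced here):

```python
def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    """Classical Newton-Raphson root refinement: repeatedly move x by
    -f(x)/f'(x) until the step size falls below tol or the iteration
    budget is exhausted."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```

For instance, refining x0 = 1 against f(x) = x² − 2 converges quadratically to √2 in a handful of iterations.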
3. Results and Analysis
3.1. Results Analysis
3.2. Comparative Analysis
4. Conclusions
- Training the deep learning models on the enhanced dataset extracted more useful features, which later improved the accuracy.
- The proposed fusion approach improved the accuracy but increased the computation time.
- The original ESO-based feature selection approach selected some redundant features that reduced the classification accuracy.
- Selecting the best features using the proposed ESOcNR maintained the classification accuracy and reduced the computational time of the fusion process.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Jain, A.K.; Ross, A.; Prabhakar, S. An Introduction to Biometric Recognition; IEEE: New York, NY, USA, 2004; pp. 4–20.
- Jain, A.K.; Ross, A.; Pankanti, S. Biometrics: A tool for information security. IEEE Trans. Inf. Forensics Secur. 2006, 1, 125–143.
- Liu, L.; Wang, H.; Li, H.; Liu, J.; Qiu, S.; Zhao, H.; Guo, X. Ambulatory human gait phase detection using wearable inertial sensors and hidden Markov model. Sensors 2021, 21, 1347.
- Deligianni, F.; Guo, Y.; Yang, G.-Z. From emotions to mood disorders: A survey on gait analysis methodology. IEEE J. Biomed. Health Inform. 2019, 23, 2302–2316.
- Liao, R.; Li, Z.; Bhattacharyya, S.S.; York, G. PoseMapGait: A model-based gait recognition method with pose estimation maps and graph convolutional networks. Neurocomputing 2022, 501, 514–528.
- Gafurov, D. A survey of biometric gait recognition: Approaches, security and challenges. In Proceedings of the Annual Norwegian Computer Science Conference, Bergen, Norway, 20–24 August 2007; pp. 19–21.
- Khan, M.A.; Kadry, S.; Parwekar, P.; Damaševičius, R.; Mehmood, A.; Khan, J.A.; Naqvi, S.R. Human gait analysis for osteoarthritis prediction: A framework of deep learning and kernel extreme learning machine. Complex Intell. Syst. 2021, 1–19.
- Bijalwan, V.; Semwal, V.B.; Mandal, T. Fusion of multi-sensor-based biomechanical gait analysis using vision and wearable sensor. IEEE Sens. J. 2021, 21, 14213–14220.
- Bayat, N.; Rastegari, E.; Li, Q. Human Gait Recognition Using Bag of Words Feature Representation Method. arXiv 2022, arXiv:2203.13317.
- Derlatka, M.; Borowska, M. Ensemble of heterogeneous base classifiers for human gait recognition. Sensors 2023, 23, 508.
- Turk, M.A.; Pentland, A.P. Face recognition using eigenfaces. In Proceedings of the 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Maui, HI, USA, 3–6 June 1991; IEEE Computer Society: Washington, DC, USA, 1991; pp. 586–591.
- Kim, D.; Paik, J. Gait recognition using active shape model and motion prediction. IET Comput. Vision 2010, 4, 25–36.
- Wang, L.; Li, Y.; Xiong, F.; Zhang, W. Gait Recognition Using Optical Motion Capture: A Decision Fusion Based Method. Sensors 2021, 21, 3496.
- Liao, R.; An, W.; Li, Z.; Bhattacharyya, S.S. A novel view synthesis approach based on view space covering for gait recognition. Neurocomputing 2021, 453, 13–25.
- Slemenšek, J.; Fister, I.; Geršak, J.; Bratina, B.; van Midden, V.M.; Pirtošek, Z.; Šafarič, R. Human Gait Activity Recognition Machine Learning Methods. Sensors 2023, 23, 745.
- Pinčić, D.; Sušanj, D.; Lenac, K. Gait Recognition with Self-Supervised Learning of Gait Features Based on Vision Transformers. Sensors 2022, 22, 7140.
- Wan, C.; Wang, L.; Phoha, V.V. A survey on gait recognition. ACM Comput. Surv. (CSUR) 2018, 51, 1–35.
- Xu, C.; Makihara, Y.; Li, X.; Yagi, Y. Occlusion-aware Human Mesh Model-based Gait Recognition. IEEE Trans. Inf. Forensics Secur. 2023, 18, 1309–1321.
- Zhu, H.; Zheng, Z.; Nevatia, R. Gait Recognition Using 3-D Human Body Shape Inference. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 909–918.
- Shi, L.-F.; Liu, Z.-Y.; Zhou, K.-J.; Shi, Y.; Jing, X. Novel Deep Learning Network for Gait Recognition Using Multimodal Inertial Sensors. Sensors 2023, 23, 849.
- Kececi, A.; Yildirak, A.; Ozyazici, K.; Ayluctarhan, G.; Agbulut, O.; Zincir, I. Implementation of machine learning algorithms for gait recognition. Eng. Sci. Technol. Int. J. 2020, 23, 931–937.
- Dou, H.; Zhang, W.; Zhang, P.; Zhao, Y.; Li, S.; Qin, Z.; Wu, F.; Dong, L.; Li, X. VersatileGait: A large-scale synthetic gait dataset with fine-grained attributes and complicated scenarios. arXiv 2021, arXiv:2101.01394.
- Zhu, Z.; Guo, X.; Yang, T.; Huang, J.; Deng, J.; Huang, G.; Du, D.; Lu, J.; Zhou, J. Gait recognition in the wild: A benchmark. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 14789–14799.
- Huang, C.; Zhang, F.; Xu, Z.; Wei, J. The Diverse Gait Dataset: Gait segmentation using inertial sensors for pedestrian localization with different genders, heights and walking speeds. Sensors 2022, 22, 1678.
- Hasan, M.A.M.; Al Abir, F.; Al Siam, M.; Shin, J. Gait recognition with wearable sensors using modified residual block-based lightweight CNN. IEEE Access 2022, 10, 42577–42588.
- An, W.; Yu, S.; Makihara, Y.; Wu, X.; Xu, C.; Yu, Y.; Liao, R.; Yagi, Y. Performance evaluation of model-based gait on multi-view very large population database with pose sequences. IEEE Trans. Biom. Behav. Identity Sci. 2020, 2, 421–430.
- Tian, Y.; Wei, L.; Lu, S.; Huang, T. Free-view gait recognition. PLoS ONE 2019, 14, e0214389.
- Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190.
- Gnetchejo, P.J.; Essiane, S.N.; Dadjé, A.; Ele, P. A combination of Newton-Raphson method and heuristics algorithms for parameter estimation in photovoltaic modules. Heliyon 2021, 7, e06673.
- Khan, A.; Khan, M.A.; Javed, M.Y.; Alhaisoni, M.; Tariq, U.; Kadry, S.; Choi, J.-I.; Nam, Y. Human gait recognition using deep learning and improved ant colony optimization. Comput. Mater. Contin. 2022, 70, 2113–2130.
- Asif, M.; Tiwana, M.I.; Khan, U.S.; Ahmad, M.W.; Qureshi, W.S.; Iqbal, J. Human gait recognition subject to different covariate factors in a multi-view environment. Results Eng. 2022, 15, 100556.
- Wang, L.; Zhang, X.; Han, R.; Yang, J.; Li, X.; Feng, W.; Wang, S. A Benchmark of Video-Based Clothes-Changing Person Re-Identification. arXiv 2022, arXiv:2211.11165.
- Rao, P.S.; Sahu, G.; Parida, P.; Patnaik, S. An Adaptive Firefly Optimization Algorithm for Human Gait Recognition. In Smart and Sustainable Technologies: Rural and Tribal Development Using IoT and Cloud Computing: Proceedings of ICSST 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 305–316.
- Mehmood, A.; Khan, M.A.; Sharif, M.; Khan, S.A.; Shaheen, M.; Saba, T.; Riaz, N.; Ashraf, I. Prosperous human gait recognition: An end-to-end system based on pre-trained CNN features selection. Multimed. Tools Appl. 2020, 1–21.
- Anusha, R.; Jaidhar, C. Clothing invariant human gait recognition using modified local optimal oriented pattern binary descriptor. Multimed. Tools Appl. 2020, 79, 2873–2896.
CASIA-B Angle (°) | Class | No. of Images Before Augmentation | No. of Images After Augmentation
---|---|---|---
0 | Bag | 2347 | 9388
0 | Coat | 2436 | 9744
0 | Normal | 2191 | 8764
18 | Bag | 2457 | 9828
18 | Coat | 2428 | 9712
18 | Normal | 2550 | 10,200
36 | Bag | 2459 | 9836
36 | Coat | 2530 | 10,120
36 | Normal | 2285 | 9140
54 | Bag | 2578 | 10,312
54 | Coat | 2661 | 10,644
54 | Normal | 2433 | 9732
72 | Bag | 2531 | 10,124
72 | Coat | 2582 | 10,328
72 | Normal | 2471 | 9884
90 | Bag | 2512 | 10,048
90 | Coat | 2717 | 10,868
90 | Normal | 2320 | 9280
108 | Bag | 2363 | 9452
108 | Coat | 2753 | 11,012
108 | Normal | 2647 | 10,588
126 | Bag | 2472 | 9888
126 | Coat | 2301 | 9204
126 | Normal | 2418 | 9672
144 | Bag | 2436 | 9744
144 | Coat | 2452 | 9808
144 | Normal | 2438 | 9752
162 | Bag | 2466 | 9864
162 | Coat | 2528 | 10,112
162 | Normal | 2422 | 9688
180 | Bag | 2423 | 9692
180 | Coat | 2674 | 10,696
180 | Normal | 2625 | 10,500
Model | Layer | Input | Output | Feature Vector |
---|---|---|---|---|
Fine-tuned MobilenetV2 | 154 | 224 × 224 × 3 | 1280 | N × 1280 |
Fine-tuned ShuffleNet | 172 | 224 × 224 × 3 | 544 | N × 544 |
Classifiers | Features | Recall (%) | Precision (%) | Accuracy (%) | Time (s) | |
---|---|---|---|---|---|---|
Fusion | Optimization | |||||
Fine Tree | ✔ | 94.8 | 94.83 | 94.8 | 154.08 | |
✔ | 94.63 | 94.63 | 94.6 | 61.748 | ||
Medium Tree | ✔ | 95.17 | 95.2 | 95.1 | 69.438 | |
✔ | 93 | 93.2 | 93.0 | 38.502 | ||
Linear SVM | ✔ | 97.3 | 97.3 | 97.2 | 466.93 | |
✔ | 97.17 | 97.2 | 97.1 | 192.66 | ||
Quadratic SVM | ✔ | 97.33 | 97.37 | 97.3 | 572.19 | |
✔ | 97.23 | 97.3 | 97.2 | 45.259 | ||
Coarse KNN | ✔ | 96.17 | 96.23 | 96.1 | 1290.8 | |
✔ | 96.37 | 96.4 | 96.3 | 585.42 | ||
Weighted KNN | ✔ | 96.27 | 96.27 | 96.2 | 1347.6 | |
✔ | 96.1 | 96.1 | 96.0 | 656.35 | ||
Bagged Trees | ✔ | 95.6 | 95.6 | 95.5 | 3765.9 | |
✔ | 95.33 | 95.3 | 95.2 | 1422.3 | ||
Subspace Discriminant | ✔ | 96.97 | 97 | 96.9 | 2999.2 | |
✔ | 96.6 | 96.67 | 96.6 | 575.46 | ||
Bilayered Neural Network | ✔ | 96.07 | 96.07 | 96.0 | 4155.3 | |
✔ | 95.8 | 95.83 | 95.8 | 2032.2 | ||
Trilayered Neural Network | ✔ | 96.3 | 96.27 | 96.2 | 4420.7 | |
✔ | 95.7 | 95.6 | 95.6 | 2049.1 |
Classifiers | Features | Recall (%) | Precision (%) | Accuracy (%) | Time (s) | |
---|---|---|---|---|---|---|
Fusion | Optimization | |||||
Fine tree | ✔ | 92.97 | 93 | 92.9 | 174.72 | |
✔ | 92.03 | 92.07 | 92.0 | 48.364 | ||
Medium tree | ✔ | 91.17 | 91.37 | 91.2 | 505.02 | |
✔ | 89.37 | 89.57 | 89.4 | 34.35 | ||
Linear SVM | ✔ | 98.4 | 98.37 | 98.4 | 498.59 | |
✔ | 97.77 | 97.77 | 97.7 | 204.978 | ||
Quadratic SVM | ✔ | 98.57 | 98.57 | 98.6 | 859.46 | |
✔ | 97.93 | 98 | 98.0 | 42.512 | ||
Coarse KNN | ✔ | 94.93 | 95.23 | 95.0 | 873.62 | |
✔ | 95.4 | 95.5 | 95.4 | 691.91 | ||
Weighted KNN | ✔ | 96.93 | 97.07 | 96.9 | 695.55 | |
✔ | 96.87 | 97 | 96.9 | 668.99 | ||
Bagged trees | ✔ | 96 | 96.03 | 96.0 | 2899.7 | |
✔ | 95.27 | 95.27 | 95.3 | 1573.6 | ||
Subspace discriminant | ✔ | 97.57 | 97.52 | 97.5 | 2049.8 | |
✔ | 96.5 | 96.57 | 96.5 | 621.48 | ||
Bilayered neural network | ✔ | 98.23 | 98.20 | 98.2 | 726.05 | |
✔ | 97.07 | 97.07 | 97.1 | 839.48 | ||
Trilayered neural network | ✔ | 98.27 | 98.27 | 98.3 | 600.06 | |
✔ | 97.17 | 97.2 | 97.2 | 1216.1 |
Classifiers | Features | Recall (%) | Precision (%) | Accuracy (%) | Time (s) | |
---|---|---|---|---|---|---|
Fusion | Optimization | |||||
Fine tree | ✔ | 90.17 | 90.23 | 90.2 | 175.99 | |
✔ | 89.57 | 89.57 | 89.6 | 30.051 | ||
Medium tree | ✔ | 87.4 | 87.33 | 87.4 | 95.118 | |
✔ | 86.8 | 86.83 | 86.8 | 23.989 | ||
Linear SVM | ✔ | 97.43 | 97.37 | 97.4 | 850.09 | |
✔ | 96.83 | 96.8 | 96.8 | 259.84 | ||
Quadratic SVM | ✔ | 97.67 | 97.63 | 97.7 | 1014.7 | |
✔ | 97.23 | 97.23 | 97.2 | 53.864 | ||
Coarse KNN | ✔ | 94.3 | 94.23 | 94.3 | 2152.5 | |
✔ | 93.8 | 93.73 | 93.8 | 699.39 | ||
Weighted KNN | ✔ | 95.67 | 95.57 | 95.6 | 2562.8 | |
✔ | 95.13 | 95.07 | 95.1 | 671.14 | ||
Bagged trees | ✔ | 95.17 | 95.13 | 95.1 | 5675.8 | |
✔ | 93.7 | 93.7 | 93.7 | 1827.5 | ||
Subspace discriminant | ✔ | 96.6 | 96.57 | 96.6 | 3608.6 | |
✔ | 95.07 | 95.03 | 95.1 | 805.07 | ||
Bilayered neural network | ✔ | 97.17 | 97.17 | 97.2 | 2644.7 | |
✔ | 96.07 | 96.07 | 96.1 | 1054 | ||
Trilayered neural network | ✔ | 97.1 | 97.1 | 97.1 | 3075.3 | |
✔ | 95.67 | 95.7 | 95.7 | 1131.5 |
Classifiers | Features | Recall (%) | Precision (%) | Accuracy (%) | Time (s) | |
---|---|---|---|---|---|---|
Fusion | Optimization | |||||
Fine tree | ✔ | 90 | 90 | 90.0 | 126.75 | |
✔ | 89.53 | 89.6 | 89.5 | 108.46 | ||
Medium tree | ✔ | 87 | 87 | 86.9 | 45.349 | |
✔ | 86.2 | 86.03 | 86.0 | 32.89 | ||
Linear SVM | ✔ | 96.17 | 96.13 | 96.1 | 835.99 | |
✔ | 95.77 | 95.8 | 95.7 | 290.16 | ||
Quadratic SVM | ✔ | 96.53 | 96.53 | 96.5 | 1020 | |
✔ | 96.27 | 96.27 | 96.2 | 320.93 | ||
Coarse KNN | ✔ | 93.23 | 93.47 | 93.2 | 1881.1 | |
✔ | 93.37 | 93.57 | 93.4 | 652.75 | ||
Weighted KNN | ✔ | 95.13 | 95.13 | 95.1 | 1889.2 | |
✔ | 95 | 95.03 | 95.0 | 913.86 | ||
Bagged trees | ✔ | 94.3 | 94.37 | 94.3 | 4780.4 | |
✔ | 93.63 | 93.67 | 93.6 | 1510.8 | ||
Subspace discriminant | ✔ | 95.67 | 95.67 | 95.7 | 3385 | |
✔ | 94.67 | 94.7 | 94.7 | 863.8 | ||
Bilayered neural network | ✔ | 96.13 | 96.13 | 96.2 | 2521 | |
✔ | 95.33 | 95.33 | 95.3 | 923.33 | ||
Trilayered neural network | ✔ | 96 | 96 | 96.0 | 2438.8 | |
✔ | 95.03 | 95 | 95.0 | 1013.3 |
Classifiers | Features | Recall (%) | Precision (%) | Accuracy (%) | Time (s) | |
---|---|---|---|---|---|---|
Fusion | Optimization | |||||
Fine tree | ✔ | 80.87 | 80.97 | 80.8 | 300.21 |
✔ | 80.47 | 80.53 | 80.4 | 107.03 | ||
Medium tree | ✔ | 75.7 | 75.7 | 75.6 | 232.76 | |
✔ | 76.3 | 78.57 | 76.1 | 72.293 | ||
Linear SVM | ✔ | 86.93 | 87.17 | 86.9 | 1263.8 | |
✔ | 86 | 86.17 | 85.9 | 666.27 | ||
Quadratic SVM | ✔ | 87.6 | 87.57 | 87.5 | 1588.8 | |
✔ | 86.93 | 86.87 | 86.8 | 204.15 | ||
Coarse KNN | ✔ | 82.87 | 82.97 | 92.8 | 1324.9 | |
✔ | 83 | 82.9 | 92.9 | 791.35 | ||
Weighted KNN | ✔ | 85.8 | 85.77 | 85.7 | 1369.1 | |
✔ | 85.17 | 85.17 | 85.1 | 925.01 | ||
Bagged trees | ✔ | 85.03 | 85.07 | 85.0 | 2694.2 | |
✔ | 84.87 | 84.87 | 84.7 | 2772.8 | ||
Subspace discriminant | ✔ | 86.67 | 86.97 | 86.6 | 2415.2 | |
✔ | 85.37 | 85.53 | 85.3 | 1544.1 | ||
Bilayered neural network | ✔ | 85.57 | 85.57 | 85.0 | 3259.8 | |
✔ | 84.1 | 84.1 | 84.0 | 2850.5 | ||
Trilayered neural network | ✔ | 85.5 | 85.57 | 85.4 | 3645.4 | |
✔ | 84.43 | 84.47 | 84.4 | 2918 |
Classifiers | Features | Recall (%) | Precision (%) | Accuracy (%) | Time (s) | |
---|---|---|---|---|---|---|
Fusion | Optimization | |||||
Fine tree | ✔ | 88 | 88.43 | 88.0 | 1773.7 | |
✔ | 87.8 | 88.43 | 87.8 | 104.61 | ||
Medium tree | ✔ | 85.37 | 86.5 | 85.4 | 281.13 | |
✔ | 83.5 | 84.57 | 83.3 | 78.005 | ||
Linear SVM | ✔ | 92.73 | 93.33 | 92.7 | 1871.9 | |
✔ | 92.4 | 92.9 | 92.4 | 735.43 | ||
Quadratic SVM | ✔ | 93.73 | 94.03 | 93.7 | 1865.3 | |
✔ | 93.17 | 93.53 | 93.1 | 737.25 | ||
Coarse KNN | ✔ | 89.57 | 90.33 | 89.4 | 2268.8 | |
✔ | 89.33 | 90.13 | 89.2 | 553.79 | ||
Weighted KNN | ✔ | 92.4 | 92.57 | 92.4 | 2467.2 | |
✔ | 92.07 | 92.13 | 91.9 | 1280.4 | ||
Bagged trees | ✔ | 91.67 | 91.79 | 91.7 | 4960.7 | |
✔ | 91.5 | 91.73 | 91.5 | 1041.7 | ||
Subspace discriminant | ✔ | 92.47 | 93.33 | 92.5 | 4581.6 | |
✔ | 90.93 | 92.07 | 91.0 | 557.85 | ||
Bilayered neural network | ✔ | 92.67 | 92.67 | 92.6 | 3507.9 | |
✔ | 91.5 | 91.53 | 91.4 | 808.15 | ||
Trilayered neural network | ✔ | 92.9 | 92.93 | 92.8 | 4274.3 | |
✔ | 91.47 | 91.47 | 91.4 | 1225 |
Classifiers | Features | Recall (%) | Precision (%) | Accuracy (%) | Time (s) | |
---|---|---|---|---|---|---|
Fusion | Optimization | |||||
Fine tree | ✔ | 89.1 | 89.1 | 89.2 | 195.8 | |
✔ | 88.03 | 88.1 | 88.2 | 60.125 | ||
Medium tree | ✔ | 83.8 | 84.5 | 84.3 | 83.087 | |
✔ | 82.5 | 85 | 83.2 | 39.314 | ||
Linear SVM | ✔ | 94.3 | 94.33 | 94.4 | 1126.1 | |
✔ | 93.6 | 94.13 | 93.7 | 276.45 | ||
Quadratic SVM | ✔ | 94.57 | 94.63 | 94.7 | 1382 | |
✔ | 94.1 | 94.13 | 94.2 | 268.99 | ||
Coarse KNN | ✔ | 89.93 | 89.9 | 90.0 | 1970 | |
✔ | 90 | 89.97 | 90.1 | 418.15 | ||
Weighted KNN | ✔ | 92.57 | 92.6 | 92.7 | 2148.8 |
✔ | 92.23 | 92.27 | 92.3 | 1020.5 | ||
Bagged trees | ✔ | 93.3 | 93.33 | 93.4 | 4736.1 | |
✔ | 92.8 | 92.8 | 92.9 | 1467.7 | ||
Subspace discriminant | ✔ | 93.87 | 93.97 | 94.0 | 3994.2 | |
✔ | 92.8 | 92.97 | 93.0 | 426.27 | ||
Bilayered neural network | ✔ | 93.87 | 93.9 | 93.9 | 3528.1 | |
✔ | 92.6 | 92.6 | 92.7 | 1079 | ||
Trilayered neural network | ✔ | 93.63 | 93.67 | 93.7 | 3639.6 | |
✔ | 92.53 | 92.57 | 92.6 | 1315.2 |
Classifiers | Features | Recall (%) | Precision (%) | Accuracy (%) | Time (s) | |
---|---|---|---|---|---|---|
Fusion | Optimization | |||||
Fine tree | ✔ | 84.87 | 85 | 84.8 | 468.07 | |
✔ | 83.9 | 83.93 | 83.9 | 25.116 | ||
Medium tree | ✔ | 83.03 | 85.07 | 82.8 | 346.5 | |
✔ | 81.9 | 84.03 | 81.7 | 65.492 | ||
Linear SVM | ✔ | 90.87 | 90.9 | 90.9 | 1243.8 | |
✔ | 89.73 | 89.7 | 89.7 | 254.83 | ||
Quadratic SVM | ✔ | 91.13 | 91.17 | 91.2 | 1450.2 | |
✔ | 90.33 | 90.37 | 90.4 | 338.84 | ||
Coarse KNN | ✔ | 87.27 | 87.63 | 87.3 | 1743.2 | |
✔ | 87.83 | 88 | 87.9 | 359.53 | ||
Weighted KNN | ✔ | 89.3 | 89.37 | 89.3 | 1733 | |
✔ | 88.37 | 88.43 | 88.4 | 399.24 | ||
Bagged trees | ✔ | 89 | 89.07 | 89.1 | 3541.7 | |
✔ | 88.5 | 88.5 | 88.6 | 1202.6 | ||
Subspace discriminant | ✔ | 90.4 | 90.47 | 90.4 | 2676.8 | |
✔ | 88.9 | 89.17 | 88.9 | 705.52 | ||
Bilayered neural network | ✔ | 90.23 | 90.23 | 90.3 | 2448.2 | |
✔ | 88.4 | 88.4 | 88.4 | 1268.9 | ||
Trilayered neural network | ✔ | 90.23 | 90.2 | 90.3 | 3238.2 | |
✔ | 88.5 | 88.5 | 88.6 | 1229.1 |
Classifiers | Features | Recall (%) | Precision (%) | Accuracy (%) | Time (s) | |
---|---|---|---|---|---|---|
Fusion | Optimization | |||||
Fine tree | ✔ | 87.43 | 87.47 | 87.4 | 126.29 | |
✔ | 86.5 | 86.6 | 86.5 | 96.952 | ||
Medium tree | ✔ | 84.33 | 85.1 | 84.3 | 987.03 | |
✔ | 84.1 | 84.83 | 84.1 | 173.73 | ||
Linear SVM | ✔ | 92.1 | 92.43 | 92.1 | 2679.4 | |
✔ | 91.27 | 91.73 | 91.3 | 296.31 | ||
Quadratic SVM | ✔ | 92.37 | 92.6 | 92.4 | 3202.6 | |
✔ | 91.9 | 92.17 | 91.9 | 327.22 | ||
Coarse KNN | ✔ | 87.93 | 88.9 | 87.9 | 3544.2 | |
✔ | 88.47 | 89.37 | 88.5 | 485.17 | ||
Weighted KNN | ✔ | 90.47 | 90.6 | 90.5 | 3531.9 | |
✔ | 90.43 | 90.53 | 90.4 | 505.14 | ||
Bagged trees | ✔ | 90.77 | 90.8 | 90.8 | 7531.4 | |
✔ | 90.43 | 90.37 | 90.3 | 1321.3 | ||
Subspace discriminant | ✔ | 91.4 | 91.63 | 91.4 | 1477.2 | |
✔ | 90.43 | 90.8 | 90.4 | 676.45 | ||
Bilayered neural network | ✔ | 91.2 | 91.2 | 91.2 | 10934 | |
✔ | 90.17 | 90.2 | 90.2 | 1393.7 | ||
Trilayered neural network | ✔ | 91.27 | 91.27 | 91.3 | 1674.1 |
✔ | 89.73 | 89.7 | 89.7 | 1460.7 |
Classifiers | Features | Recall (%) | Precision (%) | Accuracy (%) | Time (s) | |
---|---|---|---|---|---|---|
Fusion | Optimization | |||||
Fine tree | ✔ | 92.27 | 92.3 | 92.3 | 52.63 | |
✔ | 92.33 | 92.53 | 92.4 | 68.363 | ||
Medium tree | ✔ | 90.1 | 90.4 | 90.1 | 32.703 | |
✔ | 89.27 | 89.93 | 89.4 | 143.29 | ||
Linear SVM | ✔ | 96.23 | 96.4 | 96.2 | 342.89 | |
✔ | 96 | 96.13 | 96.0 | 55.835 | ||
Quadratic SVM | ✔ | 96.47 | 96.57 | 96.5 | 432.39 | |
✔ | 96.23 | 96.4 | 96.3 | 240.25 | ||
Coarse KNN | ✔ | 93.8 | 94.37 | 93.8 | 882.19 | |
✔ | 93.9 | 94.4 | 93.9 | 353.4 | ||
Weighted KNN | ✔ | 95.5 | 95.57 | 95.5 | 807.78 | |
✔ | 95.43 | 95.53 | 95.4 | 384.97 | ||
Bagged trees | ✔ | 95.47 | 95.5 | 95.5 | 126.7 | |
✔ | 94.8 | 94.8 | 94.8 | 1053.6 | ||
Subspace discriminant | ✔ | 96 | 96.07 | 96.0 | 1254.6 | |
✔ | 95.2 | 95.37 | 95.2 | 553.05 | ||
Bilayered neural network | ✔ | 96.03 | 96.03 | 96.03 | 389.06 | |
✔ | 95.37 | 95.37 | 95.4 | 798.62 | ||
Trilayered neural network | ✔ | 96.1 | 96.1 | 96.1 | 281.9 | |
✔ | 95.37 | 95.33 | 95.4 | 848.98 |
Classifiers | Features | Recall (%) | Precision (%) | Accuracy (%) | Time (s) | |
---|---|---|---|---|---|---|
Fusion | Optimization | |||||
Fine tree | ✔ | 97.5 | 97.57 | 97.5 | 140.6 | |
✔ | 97.2 | 97.17 | 97.2 | 106.4 | ||
Medium tree | ✔ | 96.7 | 96.7 | 96.7 | 88.322 | |
✔ | 95.57 | 95.57 | 95.6 | 46.451 | ||
Linear SVM | ✔ | 99.87 | 99.87 | 99.9 | 243.19 | |
✔ | 99.73 | 99.73 | 99.8 | 20.143 | ||
Quadratic SVM | ✔ | 99.83 | 99.87 | 99.9 | 240.29 | |
✔ | 99.8 | 99.77 | 99.8 | 113.25 | ||
Coarse KNN | ✔ | 99.33 | 99.37 | 99.4 | 1815.3 | |
✔ | 99.07 | 99.03 | 99.1 | 164.76 | ||
Weighted KNN | ✔ | 99.73 | 99.73 | 99.7 | 1883.6 | |
✔ | 99.6 | 99.57 | 99.6 | 168.63 | ||
Bagged trees | ✔ | 99.07 | 99.1 | 99.1 | 2476.1 | |
✔ | 98.87 | 98.87 | 98.9 | 338.02 | ||
Subspace discriminant | ✔ | 99.63 | 99.63 | 99.6 | 1925.6 | |
✔ | 99.47 | 99.47 | 99.5 | 112.38 | ||
Bilayered neural network | ✔ | 99.77 | 99.77 | 99.8 | 237.92 | |
✔ | 99.7 | 99.7 | 99.7 | 28.024 | ||
Trilayered neural network | ✔ | 99.8 | 99.83 | 99.8 | 289.07 | |
✔ | 99.67 | 99.7 | 99.7 | 34.53 |
Classifiers | 0 | 18 | 36 | 54 | 72 | 90 | 108 | 126 | 144 | 162 | 180 |
---|---|---|---|---|---|---|---|---|---|---|---|
Fine tree | 94.6 | 92.0 | 89.6 | 89.5 | 80.4 | 87.8 | 88.2 | 83.9 | 86.5 | 92.4 | 97.2 |
Medium tree | 93.0 | 89.4 | 86.8 | 86.0 | 76.1 | 83.3 | 83.2 | 81.7 | 84.1 | 89.4 | 95.6 |
Linear SVM | 97.1 | 97.7 | 96.8 | 95.7 | 85.9 | 92.4 | 93.7 | 89.7 | 91.3 | 96.0 | 99.8 |
Quadratic SVM | 97.2 | 98.0 | 97.2 | 96.2 | 86.8 | 93.1 | 94.2 | 90.4 | 91.9 | 96.3 | 99.8 |
Coarse KNN | 96.3 | 95.4 | 93.8 | 93.4 | 92.9 | 89.2 | 90.1 | 87.9 | 88.5 | 93.9 | 99.1 |
Weighted KNN | 96.0 | 96.9 | 95.1 | 95.0 | 85.1 | 91.9 | 92.3 | 88.4 | 90.4 | 95.4 | 99.6 |
Bagged trees | 95.2 | 95.3 | 93.7 | 93.6 | 84.7 | 91.5 | 92.9 | 88.6 | 90.3 | 94.8 | 98.9 |
Subspace discriminant | 96.6 | 96.5 | 95.1 | 94.7 | 85.3 | 91.0 | 93.0 | 88.9 | 90.4 | 95.2 | 99.5 |
Bilayered NN | 95.8 | 97.1 | 96.1 | 95.3 | 84.0 | 91.4 | 92.7 | 88.4 | 90.2 | 95.4 | 99.7 |
Trilayered NN | 95.6 | 97.2 | 95.7 | 95.0 | 84.4 | 91.4 | 92.6 | 88.6 | 89.7 | 95.4 | 99.7 |
Method | 0 | 18 | 36 | 54 | 72 | 90 | 108 | 126 | 144 | 162 | 180
---|---|---|---|---|---|---|---|---|---|---|---
[30] 2022 | 95.2 | 93.9 | - | - | - | - | - | - | - | - | 98.2 |
[31] 2022 | 97.0 | 97.9 | - | - | 97.2 | - | - | - | - | - | 96.0 |
[32] 2022 | 92.1 | 96.1 | - | 95.7 | - | 93 | - | - | - | 94.87 | 91.33 |
[33] 2022 | - | - | - | - | 98.3 | - | - | - | - | 94.90 | 98.6 |
[34] 2020 | - | 94.3 | 93.8 | 94.7 | - | - | - | - | - | - | - |
[35] 2019 | 98.8 | 95.6 | 96.3 | 91.9 | 94.0 | 95.2 | 94.6 | 95.4 | 90.4 | 93.00 | 95.1 |
PROPOSED | 97.2 | 98.0 | 97.2 | 96.2 | 92.8 | 93.1 | 94.2 | 90.4 | 91.9 | 96.3 | 99.8 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Jahangir, F.; Khan, M.A.; Alhaisoni, M.; Alqahtani, A.; Alsubai, S.; Sha, M.; Al Hejaili, A.; Cha, J.-h. A Fusion-Assisted Multi-Stream Deep Learning and ESO-Controlled Newton–Raphson-Based Feature Selection Approach for Human Gait Recognition. Sensors 2023, 23, 2754. https://doi.org/10.3390/s23052754