CSI-Based Human Activity Recognition Using Multi-Input Multi-Output Autoencoder and Fine-Tuning
Figure 1. Simple explanation of the trained MIMO-AE and the relation between its inputs and the extracted features.
Figure 2. MIMO-AE structure used in this paper, including array sizes and the selected hyperparameters and functions.
Figure 3. General concept of transfer learning/fine-tuning: a pretrained model is reused in the training process of a secondary model.
Figure 4. A simple presentation of the classifier without any pretrained layer.
Figure 5. Classifier structure used in this paper, including array sizes and the selected hyperparameters and functions.
Figure 6. Description of the TimeDistributed function, which preserves the order of the data.
Figure 7. Random splitting of the selected dataset into training, validation, and test (evaluation) sets.
Figure 8. A simple presentation of the classifier that incorporates the retrainable encoder, which is the only difference compared to Figure 4.
Figure 9. A simple presentation of the classifier that incorporates the untrainable encoder, which is the only difference compared to Figure 4.
Figure 10. A simple presentation of the classifier that incorporates the untrained encoder, which is the only difference compared to Figure 4.
Figure 11. A block diagram of the proposed methods.
Figure 12. Accuracy of the methods with the MIMO-AE and without an autoencoder, trained on 50% of the dataset.
Figure 13. Confusion matrices of the proposed methods, using 50% of the data for training and validation: (a) designed classifier; (b) retrainable encoder; (c) untrainable encoder; (d) untrained encoder.
Figure 14. Confusion matrices of previous methods using 50% of the data (with 80% of the total data for training and validation): (a) 2D-CNN [19]; (b) LSTM [19].
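The random split into training, validation, and test sets described for Figure 7 can be sketched as follows. This is an illustrative assumption, not the paper's exact procedure: the `random_split` helper and the 60/20/20 default fractions are made up here, while the paper varies the share of data used for training and validation from 10% to 80%.

```python
import random

def random_split(n_samples, train_frac=0.6, val_frac=0.2, seed=0):
    """Shuffle sample indices and cut them into train/validation/test
    subsets. The 60/20/20 fractions are illustrative defaults only."""
    rng = random.Random(seed)
    idx = list(range(n_samples))
    rng.shuffle(idx)
    n_train = int(n_samples * train_frac)
    n_val = int(n_samples * val_frac)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]   # remainder is held out for evaluation
    return train, val, test

train_idx, val_idx, test_idx = random_split(1000)
```

Because the shuffle is seeded, the same split can be reproduced across the different training regimes being compared.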
Abstract
1. Introduction
- The use of a multi-input multi-output autoencoder (MIMO-AE) to extract targeted information from CSI data.
- Modifying models so that they extract more information from a limited number of samples, and can therefore be trained to high accuracy on small datasets.
- Reducing the complexity of autoencoder-based CNN HAR models while keeping accuracy at an acceptable level.
2. System Method
2.1. Channel State Information
2.2. Autoencoder
2.3. MIMO-AE
2.4. Fine-Tuning
2.5. Classifier Incorporating the Encoder
2.5.1. Model a: Designed Classifier
2.5.2. Model b: Retrainable Encoder
2.5.3. Model c: Untrainable Encoder
2.5.4. Model d: Untrained Encoder
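Sections 2.5.1–2.5.4 describe four variants that differ only in whether the encoder is reused and whether its weights stay trainable during classifier training. A toy sketch of that distinction follows; the `Layer` class and every parameter count here are invented for illustration and are not the paper's actual architecture.

```python
# Toy illustration of the four training regimes (models a-d).
# All layer sizes are made up for this sketch.

class Layer:
    def __init__(self, n_params, trainable=True):
        self.n_params = n_params
        self.trainable = trainable

def count_params(layers):
    """Return (trainable, frozen) parameter counts for a list of layers."""
    trainable = sum(l.n_params for l in layers if l.trainable)
    frozen = sum(l.n_params for l in layers if not l.trainable)
    return trainable, frozen

def build_model(use_encoder, encoder_trainable):
    classifier = [Layer(1000), Layer(200)]
    if not use_encoder:
        return classifier                                   # model a
    encoder = [Layer(50_000, trainable=encoder_trainable)]  # encoder block
    return encoder + classifier

model_a = build_model(use_encoder=False, encoder_trainable=True)   # classifier alone
model_b = build_model(use_encoder=True,  encoder_trainable=True)   # fine-tuned pretrained encoder
model_c = build_model(use_encoder=True,  encoder_trainable=False)  # frozen pretrained encoder
model_d = build_model(use_encoder=True,  encoder_trainable=True)   # randomly initialized encoder:
                                                                   # same counts as model b, no pretraining
```

Freezing the encoder (model c) leaves only the classifier head trainable, which is why the frozen variant trains far fewer parameters than the fine-tuned one.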
3. Results and Discussion
3.1. Human Activity Recognition Dataset
3.2. Comparison of Proposed Models
3.3. Comparing with Others
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Liu, J.; Liu, H.; Chen, Y.; Wang, Y.; Wang, C. Wireless Sensing for Human Activity: A Survey. IEEE Commun. Surv. Tutor. 2019, 22, 1629–1645. [Google Scholar] [CrossRef]
- Nabati, M.; Ghorashi, S.A.; Shahbazian, R. Joint Coordinate Optimization in Fingerprint-Based Indoor Positioning. IEEE Commun. Lett. 2020, 25, 1192–1195. [Google Scholar] [CrossRef]
- Cui, W.; Li, B.; Zhang, L.; Chen, Z. Device-free single-user activity recognition using diversified deep ensemble learning. Appl. Soft Comput. 2021, 102, 107066. [Google Scholar] [CrossRef]
- Wang, X.; Yang, C.; Mao, S. ResBeat: Resilient Breathing Beats Monitoring with Realtime Bimodal CSI Data. In Proceedings of the GLOBECOM 2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Xu, Y.; Yang, W.; Chen, M.; Chen, S.; Huang, L. Attention-Based Gait Recognition and Walking Direction Estimation in Wi-Fi Networks. IEEE Trans. Mob. Comput. 2020, 21, 465–479. [Google Scholar] [CrossRef]
- Hindawi. A Framework for Human Activity Recognition Based on WiFi CSI Signal Enhancement. Available online: https://www.hindawi.com/journals/ijap/2021/6654752/ (accessed on 15 September 2022).
- Shalaby, E.; ElShennawy, N.; Sarhan, A. Utilizing deep learning models in CSI-based human activity recognition. Neural Comput. Appl. 2022, 34, 5993–6010. [Google Scholar] [CrossRef]
- Dua, N.; Singh, S.N.; Challa, S.K.; Semwal, V.B. A Survey on Human Activity Recognition Using Deep Learning Techniques and Wearable Sensor Data. In Proceedings of the International Conference on Machine Learning, Image Processing, Network Security and Data Sciences, Virtual, 21–22 December 2022; Springer: Berlin/Heidelberg, Germany, 2023. Available online: https://link.springer.com/chapter/10.1007/978-3-031-24352-3_5 (accessed on 15 March 2023).
- Zou, H.; Zhou, Y.; Yang, J.; Jiang, H.; Xie, L.; Spanos, C.J. DeepSense: Device-free Human Activity Recognition via Autoencoder Long-Term Recurrent Convolutional Network. In Proceedings of the 2018 IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20–24 May 2018; pp. 1–6. [Google Scholar] [CrossRef]
- Guo, L.; Zhang, H.; Wang, C.; Guo, W.; Diao, G.; Lu, B.; Lin, C.; Wang, L. Towards CSI-based diversity activity recognition via LSTM-CNN encoder-decoder neural network. Neurocomputing 2020, 444, 260–273. [Google Scholar] [CrossRef]
- Hindawi. A Deep Learning-Based Framework for Human Activity Recognition in Smart Homes. Available online: https://www.hindawi.com/journals/misy/2021/6961343/ (accessed on 12 September 2022).
- Vrskova, R.; Kamencay, P.; Hudec, R.; Sykora, P. A New Deep-Learning Method for Human Activity Recognition. Sensors 2023, 23, 2816. [Google Scholar] [CrossRef] [PubMed]
- Li, J.; Xu, H.; Wang, Y. Multi-resolution Fusion Convolutional Network for Open Set Human Activity Recognition. IEEE Internet Things J. 2023, Early Access. [Google Scholar] [CrossRef]
- Helmi, A.M.; Al-Qaness, M.A.; Dahou, A.; Elaziz, M.A. Human activity recognition using marine predators algorithm with deep learning. Futur. Gener. Comput. Syst. 2023, 142, 340–350. [Google Scholar] [CrossRef]
- Nabati, M.; Navidan, H.; Shahbazian, R.; Ghorashi, S.A.; Windridge, D. Using Synthetic Data to Enhance the Accuracy of Fingerprint-Based Localization: A Deep Learning Approach. IEEE Sensors Lett. 2020, 4, 6000204. [Google Scholar] [CrossRef]
- Wang, D.; Yang, J.; Cui, W.; Xie, L.; Sun, S. Multimodal CSI-Based Human Activity Recognition Using GANs. IEEE Internet Things J. 2021, 8, 17345–17355. [Google Scholar] [CrossRef]
- Challenges and Corresponding Solutions of Generative Adversarial Networks (GANs): A Survey Study—IOPscience. Available online: https://iopscience.iop.org/article/10.1088/1742-6596/1827/1/012066/meta (accessed on 5 December 2022).
- Prabono, A.G.; Yahya, B.N.; Lee, S.-L. Atypical Sample Regularizer Autoencoder for Cross-Domain Human Activity Recognition. Inf. Syst. Front. 2020, 23, 71–80. [Google Scholar] [CrossRef]
- Moshiri, P.F.; Shahbazian, R.; Nabati, M.; Ghorashi, S.A. A CSI-Based Human Activity Recognition Using Deep Learning. Sensors 2021, 21, 7225. [Google Scholar] [CrossRef] [PubMed]
- Cheng, X.; Huang, B.; Zong, J. Device-Free Human Activity Recognition Based on GMM-HMM Using Channel State Information. IEEE Access 2021, 9, 76592–76601. [Google Scholar] [CrossRef]
- Fang, Y.; Xiao, F.; Sheng, B.; Sha, L.; Sun, L. Cross-scene passive human activity recognition using commodity WiFi. Front. Comput. Sci. 2021, 16, 161502. [Google Scholar] [CrossRef]
- Su, J.; Liao, Z.; Sheng, Z.; Liu, A.X.; Singh, D.; Lee, H.-N. Human Activity Recognition Using Self-powered Sensors Based on Multilayer Bi-directional Long Short-Term Memory Networks. IEEE Sens. J. 2022, Early Access. [Google Scholar] [CrossRef]
- Yousefi, S.; Narui, H.; Dayal, S.; Ermon, S.; Valaee, S. A Survey on Behavior Recognition Using WiFi Channel State Information. IEEE Commun. Mag. 2017, 55, 98–104. [Google Scholar] [CrossRef]
- Kabir, M.H.; Rahman, M.H.; Shin, W. CSI-IANet: An Inception Attention Network for Human-Human Interaction Recognition Based on CSI Signal. IEEE Access 2021, 9, 166624–166638. [Google Scholar] [CrossRef]
- Ng, H.-W.; Nguyen, V.D.; Vonikakis, V.; Winkler, S. Deep Learning for Emotion Recognition on Small Datasets Using Transfer Learning. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, New York, NY, USA, 9 November 2015; pp. 443–449. [Google Scholar]
- Geng, C.; Huang, H.; Langerman, J. Multipoint Channel Charting with Multiple-Input Multiple-Output Convolutional Autoencoder. In Proceedings of the 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), Portland, OR, USA, 20–23 April 2020; pp. 1022–1028. [Google Scholar] [CrossRef]
- Hernandez, N.; Lundström, J.; Favela, J.; McChesney, I.; Arnrich, B. Literature Review on Transfer Learning for Human Activity Recognition Using Mobile and Wearable Devices with Environmental Technology. SN Comput. Sci. 2020, 1, 66. [Google Scholar] [CrossRef] [Green Version]
- Shah, B.; Bhavsar, H. Time Complexity in Deep Learning Models. Procedia Comput. Sci. 2022, 215, 202–210. [Google Scholar] [CrossRef]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
- Thakur, D.; Biswas, S.; Ho, E.S.L.; Chattopadhyay, S. ConvAE-LSTM: Convolutional Autoencoder Long Short-Term Memory Network for Smartphone-Based Human Activity Recognition. IEEE Access 2022, 10, 4137–4156. [Google Scholar] [CrossRef]
- Wang, T.; Chen, Y.; Zhang, M.; Chen, J.; Snoussi, H. Internal Transfer Learning for Improving Performance in Human Action Recognition for Small Datasets. IEEE Access 2017, 5, 17627–17633. [Google Scholar] [CrossRef]
- Khalid, H.; Tariq, S.; Kim, T.; Ko, J.H.; Woo, S.S. ORVAE: One-Class Residual Variational Autoencoder for Voice Activity Detection in Noisy Environment. Neural Process. Lett. 2022, 54, 1565–1586. [Google Scholar] [CrossRef]
- Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic Minority Over-sampling Technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
Accuracy (%) of the four models as a function of the share of the dataset used for training and validation:

Percentage of Used Dataset for Training and Validation | Retrainable Encoder + Classifier (Model b) | Untrained Encoder + Classifier (Model d) | Untrainable Encoder + Classifier (Model c) | Designed Classifier without Encoder (Model a) |
---|---|---|---|---|
10% | 37 | 20.98 | 39.79 | 39.90 |
20% | 68.2 | 45.99 | 62.65 | 57.34 |
30% | 87.27 | 72.58 | 75.08 | 62.94 |
40% | 93.21 | 81.47 | 79.3 | 69.53 |
50% | 94.49 | 72.77 | 81.72 | 73.02 |
60% | 93.5 | 91.65 | 83.22 | 80.02 |
70% | 95.87 | 81.94 | 87.27 | 81 |
80% | 96.75 | 94.6 | 87.96 | 80.5 |
Retrainable Encoder + Classifier (Model b) | Untrained Encoder + Classifier (Model d) | Untrainable Encoder + Classifier (Model c) | Designed Classifier (without Encoder) (Model a) | |
---|---|---|---|---|
Total parameters | 2,227,131 | 2,227,131 | 2,227,131 | 11,659 |
Trained parameters | 2,227,107 | 2,227,107 | 12,235 | 11,635 |
Untrained parameters | 24 | 24 | 2,214,896 | 24 |
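As a sanity check on the table above, the trained and untrained parameter counts should sum to the reported totals for every model. A minimal check in Python:

```python
# Parameter counts as reported in the table above (models b/d, c, and a).
totals    = {"b": 2_227_131, "d": 2_227_131, "c": 2_227_131, "a": 11_659}
trained   = {"b": 2_227_107, "d": 2_227_107, "c": 12_235,    "a": 11_635}
untrained = {"b": 24,        "d": 24,        "c": 2_214_896, "a": 24}

for model in totals:
    assert trained[model] + untrained[model] == totals[model]
```

All four rows balance, confirming that the frozen-encoder variant (model c) moves almost all of its 2.2 M parameters into the untrained column.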
Training and test times of three models (train time, test time) versus the share of the dataset used:

Percentage of Used Dataset for Training and Validation | Retrainable Encoder + Classifier (Model b) (Train Time, Test Time) | Untrainable Encoder + Classifier (Model c) (Train Time, Test Time) | Designed Classifier without Encoder (Model a) (Train Time, Test Time) |
---|---|---|---|
10% | 512.27, 7.62 | 210.22, 7.58 | 54.05, 0.6 |
20% | 854.6, 1.03 | 284.14, 1.03 | 44.0, 0.46 |
30% | 1225.54, 1.23 | 336.28, 1.03 | 49.37, 0.52 |
40% | 1593.31, 1.06 | 431.77, 1.4 | 59.47, 0.52 |
50% | 1951.65, 1.11 | 515.32, 1.14 | 64.04, 0.44 |
Percentage of Used Dataset for Training and Validation | R2 Score (Retrainable, Untrainable) | F1 Score (Retrainable, Untrainable) | Precision (Retrainable, Untrainable) | Recall (Retrainable, Untrainable) |
---|---|---|---|---|
10% | 0.1, -0.42 | 0.33, 0.33 | 49.7, 41.74 | 38.08, 38.08 |
20% | 0.62, 0.23 | 0.65, 0.59 | 71.12, 66.25 | 65.84, 61.91 |
30% | 0.84, 0.48 | 0.88, 0.68 | 88.43, 68.82 | 88.45, 68.30 |
40% | 0.89, 0.7 | 0.92, 0.8 | 92.33, 80.12 | 92.13, 80.58 |
50% | 0.9, 0.69 | 0.94, 0.8 | 94.6, 82.19 | 94.59, 81.08 |
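As a quick consistency check on the table above, F1 is the harmonic mean of precision and recall. Computing it from the 50% row reproduces the tabulated 0.94 to within rounding; the match is only approximate because the table's F1 is presumably macro-averaged over classes, whereas this sketch assumes the Precision/Recall columns are percentages and pools them directly.

```python
def f1_from_pr(precision_pct, recall_pct):
    """Harmonic mean of precision and recall; inputs in percent (as in the
    Precision/Recall columns), output on the 0-1 scale of the F1 column."""
    p, r = precision_pct / 100.0, recall_pct / 100.0
    return 2 * p * r / (p + r)

# 50% row, retrainable encoder: precision 94.6, recall 94.59
f1_retrainable = f1_from_pr(94.6, 94.59)
```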
Accuracy (%) of the classifier built on a simple autoencoder versus the MIMO-AE, under the retrainable and untrainable regimes:

Percentage of Used Dataset for Training and Validation | Retrainable Encoder + Classifier (Simple AE) | Untrainable Encoder + Classifier (Simple AE) | Retrainable Encoder + Classifier (MIMO-AE) | Untrainable Encoder + Classifier (MIMO-AE) |
---|---|---|---|---|
10% | 35.87 | 31.7 | 37 | 39.79 |
20% | 64.37 | 48.16 | 68.2 | 62.65 |
30% | 84.52 | 53.56 | 87.27 | 75.08 |
40% | 91.4 | 52.09 | 93.21 | 79.3 |
50% | 92.32 | 66.34 | 94.49 | 81.72 |
60% | 91.56 | 68.3 | 93.5 | 83.22 |
70% | 93.33 | 65.6 | 95.87 | 87.27 |
80% | 94.08 | 77.89 | 96.75 | 87.96 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chahoushi, M.; Nabati, M.; Asvadi, R.; Ghorashi, S.A. CSI-Based Human Activity Recognition Using Multi-Input Multi-Output Autoencoder and Fine-Tuning. Sensors 2023, 23, 3591. https://doi.org/10.3390/s23073591