

METIER: A Deep Multi-Task Learning Based Activity and User Recognition Model Using Wearable Sensors

Published: 18 March 2020

Abstract

Activity recognition (AR) and user recognition (UR) using wearable sensors are two key tasks in ubiquitous and mobile computing, and both still face challenging problems. On one hand, because users vary in how they perform activities, the performance of a well-trained AR model typically drops on new users. On the other hand, existing UR models cope poorly with activity changes, as sensor data differ significantly across activity scenarios. To address these problems, we propose METIER (deep multi-task learning based activity and user recognition), a model that solves the AR and UR tasks jointly and transfers knowledge across them: user-related knowledge from the UR task helps the AR task model user characteristics, while activity-related knowledge from the AR task guides the UR task in handling activity changes. METIER softly shares parameters between the AR and UR networks and optimizes the two networks jointly, exploiting the commonalities and differences across tasks to promote both simultaneously. Furthermore, a mutual attention mechanism enables the AR and UR tasks to use their knowledge to highlight important features for each other. Experiments on three public datasets show that our model achieves competitive performance on both tasks.
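The joint optimization described in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the parameter vectors, loss values, `lambda_share` weight, and the simplified attention function are all hypothetical stand-ins for soft parameter sharing and mutual attention.

```python
import math

def l2_distance(w_ar, w_ur):
    """Squared L2 distance between corresponding parameter vectors."""
    return sum((a - u) ** 2 for a, u in zip(w_ar, w_ur))

def joint_loss(loss_ar, loss_ur, shared_pairs, lambda_share=0.1):
    """Joint objective: both task losses plus a soft-sharing penalty that
    pulls corresponding AR/UR parameters toward each other without tying
    them, so each network can still specialize."""
    penalty = sum(l2_distance(w_ar, w_ur) for w_ar, w_ur in shared_pairs)
    return loss_ar + loss_ur + lambda_share * penalty

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mutual_attention(features, guide_scores):
    """Weight one task's features by attention scores derived from the
    other task's representation (a much-simplified stand-in for the
    paper's mutual attention mechanism)."""
    weights = softmax(guide_scores)
    return [w * f for w, f in zip(weights, features)]

# Toy example: one pair of corresponding AR/UR parameter vectors.
pairs = [([1.0, 2.0], [1.5, 1.0])]
total = joint_loss(loss_ar=0.7, loss_ur=0.9, shared_pairs=pairs)
# penalty = (1-1.5)^2 + (2-1)^2 = 1.25, so total = 0.7 + 0.9 + 0.125
print(round(total, 3))  # 1.725
```

In training, such a penalty would be added to the per-batch losses of both networks and minimized by a single optimizer over the union of their parameters, which is what makes the sharing "soft" rather than hard weight tying.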

References

[1]
Oresti Baños, Miguel Damas, Héctor Pomares, Ignacio Rojas, Máté Attila Tóth, and Oliver Amft. 2012. A benchmark dataset to evaluate sensor displacement in activity recognition. In Proceedings of the 14th ACM Conference on Ubiquitous Computing. ACM, 1026--1035.
[2]
Ling Bao and Stephen S Intille. 2004. Activity recognition from user-annotated acceleration data. In Proceedings of the 2nd International Conference on Pervasive Computing. Springer, 1--17.
[3]
Rich Caruana. 1997. Multitask learning. Machine Learning 28, 1 (1997), 41--75.
[4]
Sangil Choi, Ik-Hyun Youn, Richelle LeMay, Scott Burns, and Jong-Hoon Youn. 2014. Biometric gait recognition based on wireless acceleration sensor using k-nearest neighbor classification. In Proceedings of the 3rd International Conference on Computing, Networking and Communications. IEEE, 1091--1095.
[5]
James E Cutting and Lynn T Kozlowski. 1977. Recognizing friends by their walk: Gait perception without familiarity cues. Bulletin of the Psychonomic Society 9, 5 (1977), 353--356.
[6]
Omid Dehzangi, Mojtaba Taherisadr, and Raghvendar ChangalVala. 2017. IMU-based gait recognition using convolutional neural networks and multi-sensor fusion. Sensors 17, 12 (2017), 2735.
[7]
Gonzalo Bailador Del Pozo, Carmen Sánchez Ávila, Alberto de Santos Sierra, and Javier Guerra Casanova. 2012. Speed-independent gait identification for mobile devices. International Journal of Pattern Recognition and Artificial Intelligence 26, 8 (2012), 1260013.
[8]
Mohammad Omar Derawi, Claudia Nickel, Patrick Bours, and Christoph Busch. 2010. Unobtrusive user-authentication on mobile phones using biometric gait recognition. In Proceedings of the 6th International Conference on Intelligent Information Hiding and Multimedia Signal Processing. IEEE, 306--311.
[9]
Seham Abd Elkader, Michael Barlow, and Erandi Lakshika. 2018. Wearable sensors for recognizing individuals undertaking daily activities. In Proceedings of the 22nd ACM International Symposium on Wearable Computers. ACM, 64--67.
[10]
Matteo Gadaleta and Michele Rossi. 2018. IDNet: Smartphone-based gait recognition with convolutional neural networks. Pattern Recognition 74 (2018), 25--37.
[11]
Haodong Guo, Ling Chen, Liangying Peng, and Gencai Chen. 2016. Wearable sensor based multimodal human activity recognition exploiting the diversity of classifier ensemble. In Proceedings of the 18th ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 1112--1123.
[12]
Haodong Guo, Ling Chen, Yanbin Shen, and Gencai Chen. 2014. Activity recognition exploiting classifier level fusion of acceleration and physiological signals. In Proceedings of the 16th ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 63--66.
[13]
Nils Y Hammerla, Shane Halloran, and Thomas Plötz. 2016. Deep, convolutional, and recurrent models for human activity recognition using wearables. In Proceedings of the 25th International Joint Conference on Artificial Intelligence. AAAI, 1533--1540.
[14]
Jin-Hyuk Hong, Julian Ramos, and Anind K Dey. 2016. Toward personalized activity recognition systems with a semipopulation approach. IEEE Transactions on Human-Machine Systems 46, 1 (2016), 101--112.
[15]
Yu-Jin Hong, Ig-Jae Kim, Sang Chul Ahn, and Hyoung-Gon Kim. 2010. Mobile health monitoring system based on activity recognition using accelerometer. Simulation Modelling Practice and Theory 18, 4 (2010), 446--455.
[16]
Verne Thompson Inman, Henry James Ralston, and Frank Todd. 1981. Human Walking. Williams & Wilkins.
[17]
Yusuke Iwasawa, Kotaro Nakayama, Ikuko Eguchi Yairi, and Yutaka Matsuo. 2017. Privacy issues regarding the application of DNNs to activity-recognition using wearables and its countermeasures by use of adversarial training. In Proceedings of the 26th International Joint Conference on Artificial Intelligence. AAAI, 1930--1936.
[18]
Aftab Khan, Sebastian Mellor, Eugen Berlin, Robin Thompson, Roisin McNaney, Patrick Olivier, and Thomas Plötz. 2015. Beyond activity recognition: Skill assessment from accelerometer data. In Proceedings of the 17th ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 1155--1166.
[19]
Md Abdullah Al Hafiz Khan, Nirmalya Roy, and Archan Misra. 2018. Scaling human activity recognition via deep learning-based domain adaptation. In Proceedings of the 16th IEEE International Conference on Pervasive Computing and Communications. IEEE, 1--9.
[20]
Oscar D Lara, Alfredo J Pérez, Miguel A Labrador, and José D Posada. 2012. Centinela: A human activity recognition system based on acceleration and vital sign data. Pervasive and Mobile Computing 8, 5 (2012), 717--729.
[21]
Shinya Matsui, Nakamasa Inoue, Yuko Akagi, Goshu Nagino, and Koichi Shinoda. 2017. User adaptation of convolutional neural network for human activity recognition. In Proceedings of the 25th European Signal Processing Conference. IEEE, 753--757.
[22]
Daniela Micucci, Marco Mobilio, and Paolo Napoletano. 2017. UniMiB SHAR: A dataset for human activity recognition using acceleration data from smartphones. Applied Sciences 7, 10 (2017), 1101.
[23]
Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. 2014. Recurrent models of visual attention. In Proceedings of the 27th International Conference on Neural Information Processing Systems. MIT Press, 2204--2212.
[24]
Sebastian Münzner, Philip Schmidt, Attila Reiss, Michael Hanselmann, Rainer Stiefelhagen, and Robert Dürichen. 2017. CNN-based sensor fusion techniques for multimodal human activity recognition. In Proceedings of the 21st ACM International Symposium on Wearable Computers. ACM, 158--165.
[25]
Claudia Nickel, Christoph Busch, Sathyanarayanan Rangarajan, and Manuel Möbius. 2011. Using hidden markov models for accelerometer-based biometric gait recognition. In Proceedings of the 7th IEEE International Colloquium on Signal Processing and Its Applications. IEEE, 58--63.
[26]
Guillaume Obozinski, Ben Taskar, and Michael Jordan. 2006. Multi-task feature selection. Statistics Department, UC Berkeley, Technical Report 2 (2006).
[27]
Francisco Javier Ordóñez and Daniel Roggen. 2016. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors 16, 1 (2016), 1--25.
[28]
Ioannis Papavasileiou, Savanna Smith, Jinbo Bi, and Song Han. 2017. Gait-based continuous authentication using multimodal learning. In Proceedings of the 2nd IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies. IEEE, 290--291.
[29]
Liangying Peng, Ling Chen, Xiaojie Wu, Haodong Guo, and Gencai Chen. 2017. Hierarchical complex activity representation and recognition using topic model and classifier level fusion. IEEE Transactions on Biomedical Engineering 64, 6 (2017), 1369--1379.
[30]
Liangying Peng, Ling Chen, Zhenan Ye, and Yi Zhang. 2018. AROMA: A deep multi-task learning based simple and complex human activity recognition method using wearable sensors. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 2 (2018), 74.
[31]
REALDISP dataset. Available online: https://archive.ics.uci.edu/ml/datasets/REALDISP+Activity+Recognition+Dataset
[32]
Attila Reiss and Didier Stricker. 2013. Personalized mobile physical activity recognition. In Proceedings of the 17th Annual International Symposium on Wearable Computers. ACM, 25--28.
[33]
Jorge-L Reyes-Ortiz, Luca Oneto, Albert Samà, Xavier Parra, and Davide Anguita. 2016. Transition-aware human activity recognition using smartphones. Neurocomputing 171 (2016), 754--767.
[34]
Seyed Ali Rokni, Marjan Nourollahi, and Hassan Ghasemzadeh. 2018. Personalized human activity recognition using convolutional neural networks. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence. AAAI, 8143--8144.
[35]
Liu Rong, Duan Zhiguo, Zhou Jianzhong, and Liu Ming. 2007. Identification of individual walking patterns using gait acceleration. In Proceedings of the 1st International Conference on Bioinformatics and Biomedical Engineering. IEEE, 543--546.
[36]
Theresia Ratih Dewi Saputri, Adil Mehmood Khan, and Seok-Won Lee. 2014. User-independent activity recognition via three-stage GA-based feature selection. International Journal of Distributed Sensor Networks 10, 3 (2014), 706287.
[37]
SBHAR dataset. Available online: http://archive.ics.uci.edu/ml/datasets/Smartphone-Based+Recognition+of+Human+Activities+and+Postural+Transitions
[38]
Pekka Siirtola and Juha Röning. 2012. Recognizing human activities user-independently on smartphones based on accelerometer data. International Journal of Interactive Multimedia and Artificial Intelligence 1, 5 (2012), 38--45.
[39]
Sebastijan Sprager and Damjan Zazula. 2009. A cumulant-based method for gait identification using accelerometer data with principal component analysis and support vector machine. WSEAS Transactions on Signal Processing 5, 11 (2009), 369--378.
[40]
Xu Sun, Hisashi Kashima, Ryota Tomioka, Naonori Ueda, and Ping Li. 2011. A new multi-task learning method for personalized activity recognition. In Proceedings of the 11th IEEE International Conference on Data Mining. IEEE, 1218--1223.
[41]
Xu Sun, Hisashi Kashima, and Naonori Ueda. 2013. Large-scale personalized human activity recognition using online multitask learning. IEEE Transactions on Knowledge and Data Engineering 25, 11 (2013), 2551--2563.
[42]
Cunchao Tu, Han Liu, Zhiyuan Liu, and Maosong Sun. 2017. CANE: Context-aware network embedding for relation modeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. ACL, 1722--1731.
[43]
UniMiB dataset. Available online: https://www.sal.disco.unimib.it/technologies/unimib-shar/
[44]
Yunus Emre Ustev, Ozlem Durmaz Incel, and Cem Ersoy. 2013. User, device and orientation independent human activity recognition on mobile phones: Challenges and a proposal. In Proceedings of the 15th ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication. ACM, 1427--1436.
[45]
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems. Curran Associates Inc., 6000--6010.
[46]
Jian Bo Yang, Minh Nhut Nguyen, Phyo Phyo San, Xiao Li Li, and Shonali Krishnaswamy. 2015. Deep convolutional neural networks on multichannel time series for human activity recognition. In Proceedings of the 24th International Conference on Artificial Intelligence. AAAI, 3995--4001.
[47]
Yongxin Yang and Timothy M. Hospedales. 2017. Deep multi-task representation learning: A tensor factorisation approach. In Proceedings of the 5th International Conference on Learning Representations.
[48]
Shuochao Yao, Shaohan Hu, Yiran Zhao, Aston Zhang, and Tarek Abdelzaher. 2017. Deepsense: A unified deep learning framework for time-series mobile sensing data processing. In Proceedings of the 26th International Conference on World Wide Web. ACM, 351--360.
[49]
Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaoou Tang. 2014. Facial landmark detection by deep multi-task learning. In Proceedings of the 13th European Conference on Computer Vision. Springer, 94--108.
[50]
Zhongtang Zhao, Yiqiang Chen, Junfa Liu, Zhiqi Shen, and Mingjie Liu. 2011. Cross-people mobile-phone based activity recognition. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence. AAAI, 2545--2550.



Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 4, Issue 1
March 2020, 1006 pages
EISSN: 2474-9567
DOI: 10.1145/3388993

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. Activity recognition
    2. multi-task learning
    3. mutual attention mechanism
    4. user recognition

    Qualifiers

    • Research-article
    • Research
    • Refereed

Article Metrics

• Downloads (last 12 months): 145
• Downloads (last 6 weeks): 19
Reflects downloads up to 21 Nov 2024

Cited By

• (2024) Temporal Action Localization for Inertial-based Human Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 4, 1--19. DOI: 10.1145/3699770. Online: 21 Nov 2024
• (2024) CrossHAR: Generalizing Cross-dataset Human Activity Recognition via Hierarchical Self-Supervised Pretraining. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 2, 1--26. DOI: 10.1145/3659597. Online: 15 May 2024
• (2024) Deep Heterogeneous Contrastive Hyper-Graph Learning for In-the-Wild Context-Aware Human Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 4, 1--23. DOI: 10.1145/3631444. Online: 12 Jan 2024
• (2024) Spatial-Temporal Masked Autoencoder for Multi-Device Wearable Human Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 4, 1--25. DOI: 10.1145/3631415. Online: 12 Jan 2024
• (2024) A Handwriting Recognition System With WiFi. IEEE Transactions on Mobile Computing 23, 4, 3391--3409. DOI: 10.1109/TMC.2023.3279608. Online: Apr 2024
• (2024) Motion Pattern Recognition for Indoor Pedestrian Altitude Estimation Based on Inertial Sensor. IEEE Sensors Journal 24, 6, 8197--8209. DOI: 10.1109/JSEN.2024.3355163. Online: 15 Mar 2024
• (2024) RAPNet: Resolution-Adaptive and Predictive Early Exit Network for Efficient Image Recognition. IEEE Internet of Things Journal 11, 20, 33492--33507. DOI: 10.1109/JIOT.2024.3428554. Online: 15 Oct 2024
• (2024) Contrastive Sensor Excitation for Generalizable Cross-Person Activity Recognition. 2024 International Joint Conference on Neural Networks (IJCNN), 1--8. DOI: 10.1109/IJCNN60899.2024.10650018. Online: 30 Jun 2024
• (2024) Resource-constrained edge-based deep learning for real-time person-identification using foot-pad. Engineering Applications of Artificial Intelligence 138, 109290. DOI: 10.1016/j.engappai.2024.109290. Online: Dec 2024
• (2024) Advancements in artificial intelligence for biometrics. Engineering Applications of Artificial Intelligence 130. DOI: 10.1016/j.engappai.2023.107712. Online: 1 Apr 2024
