
STCN-GR: Spatial-Temporal Convolutional Networks for Surface-Electromyography-Based Gesture Recognition

  • Conference paper
Neural Information Processing (ICONIP 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 13110)

Abstract

Gesture recognition using surface electromyography (sEMG) is the technical core of the muscle-computer interface (MCI) in human-computer interaction (HCI); it aims to classify gestures from signals acquired from the human hand. Because sEMG signals exhibit spatial relevancy and temporal nonstationarity, sEMG-based gesture recognition is a challenging task. Previous works have attempted to model this structured information and extract spatial and temporal features, but the results have not been satisfactory. To tackle this problem, we propose spatial-temporal convolutional networks for sEMG-based gesture recognition (STCN-GR). In this paper, we first introduce the concept of the sEMG graph to represent sEMG data, replacing the images and vector sequences adopted by previous works; this provides a new perspective for research on sEMG-based tasks beyond gesture recognition. STCN-GR uses graph convolutional networks (GCNs) and temporal convolutional networks (TCNs) to capture spatial-temporal information. Additionally, the connectivity of the graph can be adjusted adaptively in different layers of the network, which increases flexibility compared with the fixed graph structure used by original GCNs. On two high-density sEMG (HD-sEMG) datasets and a sparse armband dataset, STCN-GR outperforms previous works and achieves state-of-the-art results, demonstrating superior performance and strong generalization ability.
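The pipeline described above — a graph convolution over the electrode graph with an adaptive connectivity offset, followed by a temporal convolution along the frame axis — can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function names (`stcn_block`, `normalize_adjacency`), the tensor shapes, and the learned offset matrix `B` are assumptions for illustration, and the real model stacks such blocks and trains all weights end-to-end.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize an adjacency matrix: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def stcn_block(x, A, B, W_spatial, W_temporal):
    """One spatial-temporal block (sketch).

    x: (T, V, C_in)   -- T frames, V electrodes, C_in channels per electrode
    A: (V, V)          -- fixed graph from the electrode layout
    B: (V, V)          -- learned offset giving adaptive connectivity per layer
    W_spatial: (C_in, C_hid); W_temporal: (k, C_hid, C_out)
    """
    A_norm = normalize_adjacency(A) + B      # adaptive graph connectivity
    h = np.einsum('uv,tvc->tuc', A_norm, x)  # spatial graph convolution
    h = np.maximum(h @ W_spatial, 0.0)       # per-node feature transform + ReLU
    # Temporal convolution: slide a kernel of size k along the frame axis,
    # applied independently at each electrode (no padding in this sketch).
    k = W_temporal.shape[0]
    T, V, C = h.shape
    out = np.zeros((T - k + 1, V, W_temporal.shape[-1]))
    for t in range(T - k + 1):
        window = h[t:t + k]                              # (k, V, C)
        out[t] = np.einsum('kvc,kcd->vd', window, W_temporal)
    return out
```

A classifier head would pool `out` over frames and electrodes and apply a softmax over gesture classes; the adaptive term `B` plays the role of the layer-wise learnable connectivity the abstract contrasts with the fixed graph of original GCNs.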



Acknowledgments

This research is supported by the National Natural Science Foundation of China, grant no. 61904038 and no. U1913216; National Key R&D Program of China, grant no. 2021YFC0122702 and no. 2018YFC1705800; Shanghai Sailing Program, grant no. 19YF1403600; Shanghai Municipal Science and Technology Commission, grant no. 19441907600, no. 19441908200, and no. 19511132000; Opening Project of Zhejiang Lab, grant no. 2021MC0AB01; Fudan University-CIOMP Joint Fund, grant no. FC2019-002; Opening Project of Shanghai Robot R&D and Transformation Functional Platform, grant no. KEH2310024; Ji Hua Laboratory, grant no. X190021TB190; Shanghai Municipal Science and Technology Major Project, grant no. 2021SHZDZX0103 and no. 2018SHZDZX01; ZJ Lab; and Shanghai Center for Brain Science and Brain-Inspired Technology.

Author information

Correspondence to Xiaoyang Kang or Hongbo Wang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Lai, Z., et al. (2021). STCN-GR: Spatial-Temporal Convolutional Networks for Surface-Electromyography-Based Gesture Recognition. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds.) Neural Information Processing. ICONIP 2021. Lecture Notes in Computer Science, vol. 13110. Springer, Cham. https://doi.org/10.1007/978-3-030-92238-2_3

  • DOI: https://doi.org/10.1007/978-3-030-92238-2_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92237-5

  • Online ISBN: 978-3-030-92238-2

  • eBook Packages: Computer Science, Computer Science (R0)
