Sensory Fusion and Intent Recognition for Accurate Gesture Recognition in Virtual Environments

  • Conference paper
Advances in Visual Computing (ISVC 2018)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11241)

Included in the following conference series: International Symposium on Visual Computing (ISVC)

Abstract

With the rapid growth of Virtual Reality applications, there is a pressing need to bridge the gap between the real world and the virtual environments in which humans are immersed. Activity recognition is an important factor in delivering models of human actions and operations into these virtual environments. In this paper, we define an activity as a composition of atomic gestures and intents. With this approach, the proposed algorithm detects predefined activities using the fusion of multiple sensors. First, data collected from both vision and wearable sensors are used to train Recurrent Neural Networks (RNNs) for the detection of atomic gestures. Then, sequences of these gestures, treated as observable states, are labeled with their associated intents. The intents denote hidden states, and the labeled sequences are used to train and test Hidden Markov Models (HMMs), each of which represents a single activity. In testing, the proposed gesture recognition system achieves around 90% average accuracy with 95% mean confidence, and the overall activity recognition averages 89% accuracy across simple and complex activities.
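
The atomic-gesture stage trains recurrent networks on fused vision and wearable-sensor streams. The paper's architecture details are not reproduced on this page, so the following is a minimal sketch of such a classifier, assuming PyTorch, an LSTM backbone, a 48-dimensional fused feature vector per frame, a 60-frame window, and a ten-gesture vocabulary; all of these sizes are illustrative placeholders, not the authors' settings.

```python
import torch
import torch.nn as nn

class GestureRNN(nn.Module):
    """Sequence classifier over fused sensor frames.

    Each input frame concatenates vision features with wearable
    (e.g., data-glove/IMU) readings; the feature width and gesture
    vocabulary are assumptions made for this sketch.
    """

    def __init__(self, input_dim=48, hidden_dim=128, num_gestures=10):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_gestures)

    def forward(self, x):
        # x: (batch, time, input_dim) window of fused sensor frames
        _, (h_n, _) = self.rnn(x)
        # classify the atomic gesture from the final hidden state
        return self.head(h_n[-1])

model = GestureRNN()
window = torch.randn(1, 60, 48)   # one 60-frame fused-sensor window
gesture_id = model(window).argmax(dim=-1).item()
print("predicted atomic gesture id:", gesture_id)
```

In use, such a classifier would slide over the incoming sensor stream and emit one gesture label per window; those labels form the observation sequences consumed by the HMM stage sketched next.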
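For the activity stage, the gesture labels act as HMM observations and the intents as hidden states, with one HMM per activity; a sequence is assigned to the activity whose model explains it best. Below is a minimal sketch of that scoring step, implemented as the standard scaled forward algorithm in NumPy; the intent/gesture counts, the randomly drawn parameters, and the activity names are hypothetical stand-ins for the paper's trained models.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM).

    pi: (S,) initial intent distribution
    A:  (S, S) intent-to-intent transition matrix
    B:  (S, O) gesture-emission probabilities per intent
    """
    alpha = pi * B[:, obs[0]]          # joint of first gesture and intent
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate intents, emit gesture
        c = alpha.sum()                # rescale to avoid underflow
        log_p += np.log(c)
        alpha /= c
    return log_p

# One HMM per activity; parameters are drawn at random for this sketch,
# whereas the paper trains them from labeled gesture/intent sequences.
rng = np.random.default_rng(0)

def random_hmm(n_intents=3, n_gestures=10):
    pi = rng.dirichlet(np.ones(n_intents))
    A = rng.dirichlet(np.ones(n_intents), size=n_intents)
    B = rng.dirichlet(np.ones(n_gestures), size=n_intents)
    return pi, A, B

# hypothetical activity names, not taken from the paper
activities = {name: random_hmm() for name in ("fetch", "wave", "assemble")}
gesture_seq = [2, 2, 5, 7, 5]          # toy RNN output sequence
best = max(activities,
           key=lambda a: log_likelihood(gesture_seq, *activities[a]))
print("recognized activity:", best)
```

Training the per-activity models, for example with Baum-Welch or directly from the labeled intent sequences the abstract describes, would replace the random parameters used here.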

Acknowledgments

This material is based upon work supported in part by the U.S. Army Research Laboratory and the U.S. Department of Defense under grant numbers W911NF-15-1-0024, W911NF-15-1-0455, and W911NF-16-1-0473. This support does not necessarily imply endorsement by the DoD or ARL.

Author information

Correspondence to Alireza Tavakkoli.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Simmons, S., Clark, K., Tavakkoli, A., Loffredo, D. (2018). Sensory Fusion and Intent Recognition for Accurate Gesture Recognition in Virtual Environments. In: Bebis, G., et al. (eds.) Advances in Visual Computing. ISVC 2018. Lecture Notes in Computer Science, vol 11241. Springer, Cham. https://doi.org/10.1007/978-3-030-03801-4_22

  • DOI: https://doi.org/10.1007/978-3-030-03801-4_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03800-7

  • Online ISBN: 978-3-030-03801-4

  • eBook Packages: Computer Science, Computer Science (R0)
