Abstract
With the rapid growth of Virtual Reality applications, there is a significant need to bridge the gap between the real world and the virtual environments in which humans are immersed. Activity recognition will be an important factor in delivering models of human actions and operations into these virtual environments. In this paper, we define an activity as being composed of atomic gestures and intents. With this approach, the proposed algorithm detects predefined activities by fusing data from multiple sensors. First, data is collected from both vision and wearable sensors to train Recurrent Neural Networks (RNNs) for the detection of atomic gestures. Then, sequences of the gestures, treated as observable states, are labeled with their associated intents. These intents denote hidden states, and the labeled sequences are used to train and test Hidden Markov Models (HMMs), each of which represents a single activity. Upon testing, the proposed gesture recognition system achieves approximately 90% average accuracy with 95% mean confidence, and the overall activity recognition achieves an average accuracy of 89% across simple and complex activities.
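To make the two-stage pipeline concrete, the sketch below is a minimal, illustrative implementation of the activity-classification step only: a discrete gesture sequence (assumed to come from the upstream RNN gesture classifier, which is not shown) is scored against one HMM per activity with the forward algorithm, and the best-scoring activity is returned. This is not the authors' implementation; the activity names, model sizes, and all probability values are illustrative placeholders rather than learned parameters.

```python
# Minimal sketch: score a gesture sequence against one HMM per activity.
# Gesture indices are assumed to be produced by an upstream RNN classifier.
import numpy as np


def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood P(obs | HMM) via the scaled forward algorithm.

    obs     : list of gesture indices (observable states)
    start_p : (n_intents,)             initial intent probabilities
    trans_p : (n_intents, n_intents)   intent transition probabilities
    emit_p  : (n_intents, n_gestures)  gesture emission probabilities
    """
    alpha = start_p * emit_p[:, obs[0]]        # initialize with first gesture
    scale = alpha.sum()
    log_lik = np.log(scale)
    alpha = alpha / scale                      # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, o]   # propagate intents, emit gesture
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha = alpha / scale
    return log_lik


def recognize_activity(gesture_seq, activity_hmms):
    """Return the activity whose HMM best explains the RNN-labelled gestures."""
    return max(activity_hmms,
               key=lambda name: forward_log_likelihood(gesture_seq, *activity_hmms[name]))


if __name__ == "__main__":
    # Two hypothetical activities, each with 2 hidden intents and 3 atomic gestures.
    hmms = {
        "pick_and_place": (np.array([0.8, 0.2]),
                           np.array([[0.7, 0.3], [0.4, 0.6]]),
                           np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])),
        "wave":           (np.array([0.5, 0.5]),
                           np.array([[0.9, 0.1], [0.2, 0.8]]),
                           np.array([[0.1, 0.8, 0.1], [0.2, 0.6, 0.2]])),
    }
    gestures = [0, 0, 1, 2, 2]   # e.g. per-frame output of the gesture RNN
    print(recognize_activity(gestures, hmms))
```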
Acknowledgments
This material is based upon work supported in part by the U.S. Army Research Laboratory and the U.S. Department of Defense under grant numbers W911NF-15-1-0024, W911NF-15-1-0455, and W911NF-16-1-0473. This support does not necessarily imply endorsement by the DoD or ARL.
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Simmons, S., Clark, K., Tavakkoli, A., Loffredo, D. (2018). Sensory Fusion and Intent Recognition for Accurate Gesture Recognition in Virtual Environments. In: Bebis, G., et al. (eds.) Advances in Visual Computing. ISVC 2018. Lecture Notes in Computer Science, vol. 11241. Springer, Cham. https://doi.org/10.1007/978-3-030-03801-4_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-03800-7
Online ISBN: 978-3-030-03801-4