
Development of a Framework for Human–Robot interactions with Indian Sign Language Using Possibility Theory

Published in International Journal of Social Robotics

Abstract

This paper demonstrates the capability of the NAO humanoid robot to interact with hearing-impaired persons using Indian Sign Language (ISL). The principal contributions of the paper are: a wavelet descriptor is applied to extract moment-invariant shape features of hand gestures, and possibility theory (PT) is used to classify the gestures. Preprocessing and extraction of overlapping frames (the start and end points of each gesture) are the other major tasks, which are solved using background modeling and a novel gradient method. We show that the overlapping frames help to fragment a continuous ISL gesture sequence into isolated gestures, which are then processed and classified. During segmentation, some geometrical properties of the hand, such as shape and orientation, become deformed; this is overcome by extracting a new moment-invariant feature through the wavelet descriptor. These features are then combined with two further features (orientation and speed) and classified using PT. We use PT in place of probability theory because possibility theory deals with both uncertainty and imprecision, whereas probability theory handles only uncertainty. Experiments were performed on 20 sentences of continuous ISL gestures comprising 4000 samples, each sentence having 20 instances; 50% of the samples were used for training and 50% for testing. Analysis of the results shows that the proposed approach achieves 92% classification accuracy with 20 subjects on continuous ISL gestures, a 10% improvement over other classifiers such as the Hidden Markov Model and KNN. The classified gestures are then combined into a sentence in text form, which is matched against the knowledge database of the NAO robot; this matching further increases the classification accuracy.
These sentences are then converted into speech or gestures by the NAO robot.
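The possibility-based classification described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the triangular possibility distributions, the per-class prototypes and spreads, and the min/max combination rule are all illustrative assumptions; it only shows the general pattern of ranking classes by joint possibility over the (shape, orientation, speed) features.

```python
def triangular_possibility(x, core, spread):
    """Possibility of value x under a triangular distribution
    centred at `core` with half-width `spread` (pi(core) = 1)."""
    return max(0.0, 1.0 - abs(x - core) / spread)

def classify(features, prototypes, spreads):
    """Assign the class with the highest joint possibility.

    The joint possibility of a class is the minimum over its feature
    possibilities (conjunctive combination), a standard choice in
    possibility theory; the decision rule then takes the maximum
    over classes.
    """
    scores = {}
    for label, proto in prototypes.items():
        scores[label] = min(
            triangular_possibility(x, c, s)
            for x, c, s in zip(features, proto, spreads)
        )
    return max(scores, key=scores.get), scores

# Toy prototypes for three hypothetical gesture classes over
# (shape descriptor, orientation in degrees, speed) -- invented values.
prototypes = {
    "hello":  (0.2, 90.0, 1.0),
    "thanks": (0.6, 45.0, 0.5),
    "stop":   (0.9, 0.0,  0.2),
}
spreads = (0.3, 30.0, 0.6)  # tolerance (half-width) per feature

label, scores = classify((0.25, 80.0, 0.9), prototypes, spreads)
print(label)  # -> "hello"
```

Because the conjunctive rule takes a minimum, a single badly matching feature drives a class's possibility to zero, which is how imprecise observations are penalised without requiring probabilistic normalisation.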
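The moment-invariant shape feature extracted through a wavelet descriptor can likewise be sketched in simplified form. Assumptions: the hand contour is represented by a centroid-distance signature sampled along the boundary, and a two-level Haar decomposition stands in for the paper's wavelet descriptor; dividing by the mean distance gives scale invariance. This is a toy stand-in for the method, not the authors' exact feature.

```python
import math

def haar_decompose(signal, levels=2):
    """One-dimensional Haar wavelet decomposition of a contour
    signature. Returns (coarse approximation, details per level)."""
    approx = list(signal)
    details = []
    for _ in range(levels):
        nxt, det = [], []
        for a, b in zip(approx[0::2], approx[1::2]):
            nxt.append((a + b) / math.sqrt(2))  # low-pass (shape trend)
            det.append((a - b) / math.sqrt(2))  # high-pass (local detail)
        details.append(det)
        approx = nxt
    return approx, details

def shape_descriptor(distances):
    """Scale-normalised coarse shape descriptor: dividing by the mean
    centroid distance cancels uniform scaling of the hand contour, so
    the same gesture seen closer or farther yields the same feature."""
    mean = sum(distances) / len(distances)
    normalised = [d / mean for d in distances]
    approx, _ = haar_decompose(normalised, levels=2)
    return approx

# Example signature: centroid distances sampled at 8 boundary points.
signature = [2.0, 2.5, 3.0, 2.5, 2.0, 1.5, 1.0, 1.5]
print(shape_descriptor(signature))
```

A constant signature (a circular contour) produces the same descriptor at any scale, which is the invariance property the segmentation-induced deformation makes necessary.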





Corresponding author

Correspondence to Neha Baranwal.

Additional information

Research supported by ABC Foundation.


Cite this article

Baranwal, N., Singh, A.K. & Nandi, G.C. Development of a Framework for Human–Robot interactions with Indian Sign Language Using Possibility Theory. Int J of Soc Robotics 9, 563–574 (2017). https://doi.org/10.1007/s12369-017-0412-0

