
A Method for Analyzing Spatial Relationships Between Words in Sign Language Recognition

  • Conference paper
  • First Online:
Gesture-Based Communication in Human-Computer Interaction (GW 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1739)

Abstract

Sign language contains expressions that use spatial relationships, called directional verbs. To understand a sign-language sentence that includes a directional verb, it is necessary to analyze the spatial relationships between the recognized sign-language words and to find the proper combination of the directional verb and the sign-language words related to it. In this paper, we propose an analysis method that evaluates the spatial relationship between a directional verb and other sign-language words according to the distribution of the parameters representing that spatial relationship.
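The abstract's idea of scoring verb-word combinations by the distribution of spatial parameters can be sketched as follows. This is a minimal illustration, not the paper's actual method: the parameter (distance between the verb's movement endpoint and a candidate word's signing location), the Gaussian model, and all names and values here are assumptions introduced for the example.

```python
import math

def gaussian_likelihood(x: float, mean: float, std: float) -> float:
    """Likelihood of observing spatial parameter x under N(mean, std^2)."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def best_pairing(verb_endpoint, word_locations, mean=0.0, std=0.2):
    """Pick the candidate word whose signing location best matches the
    directional verb's movement endpoint.

    verb_endpoint: (x, y) position where the verb's motion ends.
    word_locations: dict mapping word -> (x, y) signing location.
    The distribution N(mean, std^2) over the endpoint-to-location distance
    is a stand-in for the learned parameter distribution.
    """
    def score(loc):
        dist = math.dist(verb_endpoint, loc)  # spatial parameter
        return gaussian_likelihood(dist, mean, std)
    return max(word_locations, key=lambda w: score(word_locations[w]))

# A verb ending near the "you" location pairs with "you".
locations = {"I": (0.0, 0.0), "you": (0.5, 0.3), "he": (-0.4, 0.2)}
print(best_pairing((0.48, 0.28), locations))  # → you
```

The point of the sketch is only the overall shape: each candidate combination gets a likelihood from a distribution over spatial parameters, and the highest-scoring combination is taken as the verb's referent.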




Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sagawa, H., Takeuchi, M. (1999). A Method for Analyzing Spatial Relationships Between Words in Sign Language Recognition. In: Braffort, A., Gherbi, R., Gibet, S., Teil, D., Richardson, J. (eds) Gesture-Based Communication in Human-Computer Interaction. GW 1999. Lecture Notes in Computer Science, vol 1739. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46616-9_18

  • DOI: https://doi.org/10.1007/3-540-46616-9_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66935-7

  • Online ISBN: 978-3-540-46616-1
