Research Article | Public Access

Teaching American Sign Language in Mixed Reality

Published: 18 December 2020

Abstract

This paper presents a holistic system to scale up the teaching and learning of vocabulary words of American Sign Language (ASL). The system leverages the most recent mixed-reality technology to allow the user to perceive her own hands in an immersive learning environment with first- and third-person views for motion demonstration and practice. Precise motion sensing is used to record and evaluate motion, providing real-time feedback tailored to the specific learner. As part of this evaluation, learner motions are matched to features derived from the Hamburg Notation System (HNS) developed by sign-language linguists. We develop a prototype to evaluate the efficacy of mixed-reality-based interactive motion teaching. Results with 60 participants show a statistically significant improvement in learning ASL signs when using our system, in comparison to traditional desktop-based, non-interactive learning. We expect this approach to ultimately allow teaching and guided practice of thousands of signs.
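
The evaluation step described above matches learner motions against features derived from the Hamburg Notation System. As a minimal sketch of how such matching could work (not the authors' implementation; the feature layout, reference values, and sign names below are illustrative assumptions), one can encode handshape, location, and movement, the components HNS notates, as a numeric vector and score a learner's attempt by its nearest-neighbor distance to per-sign reference vectors:

```python
# Illustrative sketch only: nearest-neighbor matching of a learner's hand
# motion to reference signs using HNS-inspired features (handshape,
# location, movement). All dimensions and values are hypothetical.
import numpy as np

def hns_features(joint_angles, palm_position, motion_direction):
    """Build one feature vector from the three components HNS notates:
    handshape (finger joint angles, radians), location (3D palm position),
    and movement (unit direction of hand travel)."""
    direction = np.asarray(motion_direction, dtype=float)
    direction = direction / (np.linalg.norm(direction) + 1e-9)  # normalize
    return np.concatenate([np.asarray(joint_angles, dtype=float),
                           np.asarray(palm_position, dtype=float),
                           direction])

def nearest_sign(learner_vec, reference_signs):
    """Return the reference sign closest to the learner's attempt
    (1-nearest neighbor under Euclidean distance) and that distance."""
    best_name, best_dist = None, float("inf")
    for name, ref_vec in reference_signs.items():
        dist = float(np.linalg.norm(learner_vec - ref_vec))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist

# Hypothetical references: 10 joint angles + 3D location + 3D movement.
references = {
    "HELLO":  np.concatenate([np.full(10, 0.20), [0.0, 1.6, 0.3], [1.0, 0.0, 0.0]]),
    "THANKS": np.concatenate([np.full(10, 0.10), [0.0, 1.5, 0.2], [0.0, 0.0, 1.0]]),
}

learner = hns_features(np.full(10, 0.18), [0.02, 1.58, 0.31], [0.9, 0.1, 0.0])
sign, dist = nearest_sign(learner, references)
print(f"closest sign: {sign}, feature distance: {dist:.3f}")  # feedback signal
```

In a full system the reference vectors would come from expert demonstrations captured by the motion-sensing pipeline, and the distance could be decomposed per HNS component (handshape vs. location vs. movement) to drive the kind of targeted, real-time feedback the paper describes.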

Supplementary Material

shao.zip
Supplemental movie, appendix, image, and software files for "Teaching American Sign Language in Mixed Reality"




Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 4, Issue 4
December 2020
1356 pages
EISSN: 2474-9567
DOI: 10.1145/3444864
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 18 December 2020
Published in IMWUT Volume 4, Issue 4


Author Tags

  1. American Sign Language
  2. Mixed reality
  3. Motion teaching

Qualifiers

  • Research-article
  • Research
  • Refereed


Cited By

  • (2024) Weaving Physical and Physiological Sensing with Computational Fabrics. Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services, 748-750. https://doi.org/10.1145/3643832.3661385. Online publication date: 3-Jun-2024.
  • (2024) ASL champ!: a virtual reality game with deep-learning driven sign recognition. Computers & Education: X Reality, 4, 100059. https://doi.org/10.1016/j.cexr.2024.100059. Online publication date: 2024.
  • (2024) Learning sign language with mixed reality applications - the exploratory case study with deaf students. Education and Information Technologies. https://doi.org/10.1007/s10639-024-12525-1. Online publication date: 23-Feb-2024.
  • (2024) Reshaping the Future of Learning Disabilities in Higher Education with AI. Applied Assistive Technologies and Informatics for Students with Disabilities, 17-33. https://doi.org/10.1007/978-981-97-0914-4_2. Online publication date: 29-May-2024.
  • (2024) User Experience Evaluation Methods in Mixed Reality Environments. Social Computing and Social Media, 179-193. https://doi.org/10.1007/978-3-031-61281-7_12. Online publication date: 1-Jun-2024.
  • (2024) Mixed-Integer Programming for Adaptive VR Workflow Training. Virtual, Augmented and Mixed Reality, 325-344. https://doi.org/10.1007/978-3-031-61047-9_21. Online publication date: 29-Jun-2024.
  • (2024) Immersive Technologies for Accessible User Experiences. Encyclopedia of Computer Graphics and Games, 914-921. https://doi.org/10.1007/978-3-031-23161-2_449. Online publication date: 5-Jan-2024.
  • (2023) N-euro Predictor. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7(3), 1-25. https://doi.org/10.1145/3610884. Online publication date: 27-Sep-2023.
  • (2023) Supporting ASL Communication Between Hearing Parents and Deaf Children. Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 1-5. https://doi.org/10.1145/3597638.3614511. Online publication date: 22-Oct-2023.
  • (2023) Navigating the Audit Landscape: A Framework for Developing Transparent and Auditable XR. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1418-1431. https://doi.org/10.1145/3593013.3594090. Online publication date: 12-Jun-2023.
