
DOI: 10.1145/3672539.3686336

Micro-Gesture Recognition of Tongue via Bone Conduction Sound

Published: 13 October 2024

Abstract

We propose a hands-free and less perceptible tongue-gesture sensing method that captures the bone-conduction sound generated when the tongue rubs against the teeth. The sound is picked up by bone-conduction microphones attached behind the ears. In this work, we show that tongue-slide, tongue-snap, and teeth-click gestures can be classified with a decision tree operating on characteristics of the sound spectrogram. We conducted a preliminary experiment to verify that the input vocabulary of bone-conduction mouth-microgesture devices can be expanded from teeth-only gestures to both teeth and tongue gestures without any additional obtrusive sensors. The evaluation showed that our method achieves a classification accuracy of 82.7% with user-specific parameter adjustment.
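The abstract describes a pipeline of spectrogram feature extraction followed by a decision-tree classifier. The paper does not publish its exact features or training details, so the following is only a minimal sketch of that general pipeline: synthetic waveforms stand in for real bone-conduction recordings, and the feature set (spectral centroid, peak power, total power, spectrogram spread) is an illustrative assumption, not the authors' method.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.tree import DecisionTreeClassifier

FS = 8000  # assumed sampling rate for the sketch

def spectral_features(signal, fs=FS):
    """Summarize a short clip as a few spectrogram statistics (illustrative)."""
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=256)
    power = Sxx.mean(axis=1)                      # mean power per frequency bin
    centroid = (f * power).sum() / power.sum()    # spectral centroid
    return np.array([centroid, power.max(), power.sum(), Sxx.std()])

rng = np.random.default_rng(0)

def synth(kind, n=2048, fs=FS):
    """Synthetic stand-ins for the three gestures (hypothetical signal models)."""
    t = np.arange(n) / fs
    if kind == "slide":   # sustained low-frequency rubbing tone
        return np.sin(2 * np.pi * 300 * t) + 0.1 * rng.standard_normal(n)
    if kind == "snap":    # brief broadband burst
        s = 0.05 * rng.standard_normal(n)
        s[1000:1100] += rng.standard_normal(100)
        return s
    # "click": short high-frequency transient
    s = 0.05 * rng.standard_normal(n)
    s[1000:1050] += np.sin(2 * np.pi * 2000 * t[:50])
    return s

# Build a small labeled dataset and fit a shallow decision tree.
X, y = [], []
for label, kind in enumerate(["slide", "snap", "click"]):
    for _ in range(30):
        X.append(spectral_features(synth(kind)))
        y.append(label)

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])  # hold-out accuracy on alternating samples
print(f"hold-out accuracy: {acc:.2f}")
```

On real recordings, the paper additionally reports that per-user parameter adjustment was needed to reach its 82.7% accuracy; a shallow tree like the one above is cheap enough to refit per user.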




Published In

UIST Adjunct '24: Adjunct Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology
October 2024
394 pages
ISBN:9798400707186
DOI:10.1145/3672539
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. bone conduction
  2. mouth gesture recognition
  3. teeth/tongue input
  4. wearable sensing

Qualifiers

  • Poster
  • Research
  • Refereed limited

Funding Sources

  • This is a joint research project with a private company.

Conference

UIST '24

Acceptance Rates

Overall Acceptance Rate 355 of 1,733 submissions, 20%

