Research article | Open access

ClenchClick: Hands-Free Target Selection Method Leveraging Teeth-Clench for Augmented Reality

Published: 07 September 2022

Abstract

We propose to explore teeth-clenching-based target selection in Augmented Reality (AR), as the subtlety of the interaction can benefit applications that occupy the user's hands or that are sensitive to social norms. To support this investigation, we implemented an EMG-based teeth-clenching detection system (ClenchClick) that adopts customized thresholds for different users. We first explored and compared potential interaction designs that combine head movements and teeth clenching. We finalized the interaction as a Point-and-Click technique with clenches as the confirmation mechanism. We evaluated the task load and performance of ClenchClick by comparing it with two baseline methods in target selection tasks. Results showed that ClenchClick outperformed hand gestures in workload, physical load, accuracy, and speed, and outperformed dwell in workload and temporal load. Lastly, through user studies, we demonstrated the advantages of ClenchClick in real-world tasks, including efficient and accurate hands-free target selection, natural and unobtrusive interaction in public, and robust head-gesture input.
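The abstract notes that ClenchClick detects clenches from EMG with customized per-user thresholds, but the detection algorithm itself appears only in the full paper. As a rough illustration of how such per-user thresholding might work, the sketch below calibrates a decision threshold from short rest and clench recordings and then flags signal windows whose RMS envelope exceeds it. Everything here is an assumption for illustration, not the authors' implementation: the function names (`calibrate_threshold`, `detect_clench`), the 1 kHz sampling rate, the window length, and the margin factor `k` are all hypothetical.

```python
import numpy as np

# Illustrative sketch only -- not the authors' implementation.
# Assumes surface EMG from the masseter area, sampled at a fixed rate.

def calibrate_threshold(rest_emg: np.ndarray, clench_emg: np.ndarray,
                        k: float = 0.5) -> float:
    """Place a per-user threshold between the RMS of a resting recording
    and the RMS of a deliberate-clench recording; k sets the margin."""
    rest_rms = np.sqrt(np.mean(np.square(rest_emg)))
    clench_rms = np.sqrt(np.mean(np.square(clench_emg)))
    return rest_rms + k * (clench_rms - rest_rms)

def detect_clench(emg: np.ndarray, threshold: float,
                  fs: int = 1000, win_ms: int = 100) -> np.ndarray:
    """Return one boolean per non-overlapping window, marking windows
    whose RMS envelope exceeds the user's calibrated threshold."""
    win = int(fs * win_ms / 1000)
    n_windows = len(emg) // win
    windows = emg[: n_windows * win].reshape(n_windows, win)
    rms = np.sqrt(np.mean(np.square(windows), axis=1))
    return rms > threshold

# Hypothetical usage with 1 kHz recordings:
# threshold = calibrate_threshold(rest_recording, clench_recording)
# clicks = detect_clench(live_stream, threshold)
```

In the Point-and-Click design described in the abstract, a detected clench would play the role of the mouse click, confirming whichever target the head-controlled cursor is pointing at.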

Supplementary Material

shen (shen.zip)
Supplemental movie, appendix, image, and software files for ClenchClick: Hands-Free Target Selection Method Leveraging Teeth-Clench for Augmented Reality




      Information & Contributors

      Information

      Published In

      cover image Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
      Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies  Volume 6, Issue 3
      September 2022
      1612 pages
      EISSN:2474-9567
      DOI:10.1145/3563014
      Issue’s Table of Contents
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 07 September 2022
      Published in IMWUT Volume 6, Issue 3


      Author Tags

      1. EMG sensing
      2. augmented reality
      3. hands-free interaction
      4. target selection

      Qualifiers

      • Research-article
      • Research
      • Refereed


      Article Metrics

• Downloads (Last 12 months): 291
• Downloads (Last 6 weeks): 35
      Reflects downloads up to 27 Nov 2024

Cited By
• (2024) GazePuffer: Hands-Free Input Method Leveraging Puff Cheeks for VR. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 331-341. DOI: 10.1109/VR58804.2024.00055. Online publication date: 16-Mar-2024.
• (2024) Eye-Hand Typing: Eye Gaze Assisted Finger Typing via Bayesian Processes in AR. IEEE Transactions on Visualization and Computer Graphics, 30(5), 2496-2506. DOI: 10.1109/TVCG.2024.3372106. Online publication date: 19-Mar-2024.
• (2024) TeethFa: Real-Time, Hand-Free Teeth Gestures Interaction Using Fabric Sensors. IEEE Internet of Things Journal, 11(21), 35223-35237. DOI: 10.1109/JIOT.2024.3434657. Online publication date: 1-Nov-2024.
• (2023) Fingerprinting IoT Devices Using Latent Physical Side-Channels. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7(2), 1-26. DOI: 10.1145/3596247. Online publication date: 12-Jun-2023.
