
Exploring Gaze for Assisting Freehand Selection-based Text Entry in AR

Published: 13 May 2022

Abstract

With eye-tracking increasingly available in Augmented Reality, we explore how gaze can be used to assist freehand gestural text entry. Here the eyes are often coordinated with manual input across the spatial positions of the keys. Inspired by this, we investigate gaze-assisted selection-based text entry through the concept of spatial alignment of both modalities. Users can enter text by aligning both gaze and manual pointer at each key, as a novel alternative to existing dwell-time or explicit manual triggers. We present a text entry user study comparing two such alignment techniques to a gaze-only and a manual-only baseline. The results show that one alignment technique reduces physical finger movement by more than half compared to standard in-air finger typing, and is faster and exhibits less perceived eye fatigue than an eyes-only dwell-time technique. We discuss trade-offs between unimodal and multimodal text entry techniques, pointing to novel ways to integrate eye movements to facilitate virtual text entry.
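The selection concept described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the key geometry, hitbox size, and the reduction of gaze and hand rays to 2D intersection points on the keyboard plane are all simplifying assumptions for illustration. The idea it shows is the core of alignment-based selection: a key fires only when the gaze point and the manual pointer land on the same key, so neither a dwell timeout nor an explicit trigger gesture is needed.

```python
# Hypothetical sketch of gaze-hand alignment selection (not the paper's code).
# Gaze and hand rays are assumed to be pre-intersected with the keyboard
# plane, yielding 2D points in keyboard coordinates (metres).
from dataclasses import dataclass

@dataclass
class Key:
    char: str
    x: float          # key centre on the keyboard plane
    y: float
    half: float = 0.02  # assumed half-width of a square key hitbox

    def contains(self, px: float, py: float) -> bool:
        return abs(px - self.x) <= self.half and abs(py - self.y) <= self.half

def aligned_key(keys, gaze, pointer):
    """Return the key that BOTH modalities point at, or None.

    Selection requires spatial alignment: gaze alone or the pointer
    alone on a key does not trigger it.
    """
    for key in keys:
        if key.contains(*gaze) and key.contains(*pointer):
            return key
    return None

keys = [Key("a", 0.0, 0.0), Key("s", 0.05, 0.0)]
# Gaze and pointer both on 'a': the key is selected.
hit = aligned_key(keys, gaze=(0.005, 0.0), pointer=(-0.01, 0.01))
# Gaze on 'a' but pointer drifting toward 's': no selection fires.
miss = aligned_key(keys, gaze=(0.005, 0.0), pointer=(0.045, 0.0))
```

In a real AR system the 2D points would come from ray-casting the tracked gaze direction and fingertip onto the keyboard surface, and the hitbox test would typically tolerate eye-tracker noise with a larger gaze region than pointer region.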

Supplementary Material

MP4 File (v6etra141.mp4)
Supplemental video
MP4 File (S1_Exploring Gaze for Assisting Freehand.mp4)
Conference presentation (ETRA Long Papers): "Exploring Gaze for Assisting Freehand Selection-based Text Entry in AR" by Mathias N. Lystbæk, Ken Pfeuffer, Jens Emil Sloth Grønbæk, and Hans Gellersen. DOI: 10.1145/3530882



Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 6, Issue ETRA (May 2022), 198 pages
EISSN: 2573-0142
DOI: 10.1145/3537904
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. augmented reality
  2. eye-tracking
  3. gaze interaction
  4. multimodal ui
  5. text entry
  6. virtual keyboard

Qualifiers

  • Research-article


Cited By

  • (2024) Comparison of Unencumbered Interaction Technique for Head-Mounted Displays. Proceedings of the ACM on Human-Computer Interaction 8(ISS), 500-516. DOI: 10.1145/3698146. Online publication date: 24-Oct-2024
  • (2024) The Impact of Gaze and Hand Gesture Complexity on Gaze-Pinch Interaction Performances. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 622-626. DOI: 10.1145/3675094.3678990. Online publication date: 5-Oct-2024
  • (2024) Hands-on, Hands-off: Gaze-Assisted Bimanual 3D Interaction. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-12. DOI: 10.1145/3654777.3676331. Online publication date: 13-Oct-2024
  • (2024) Keep Your Eyes on the Target: Enhancing Immersion and Usability by Designing Natural Object Throwing with Gaze-based Targeting. Proceedings of the 2024 Symposium on Eye Tracking Research and Applications, 1-7. DOI: 10.1145/3649902.3653338. Online publication date: 4-Jun-2024
  • (2024) Body Language for VUIs: Exploring Gestures to Enhance Interactions with Voice User Interfaces. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 133-150. DOI: 10.1145/3643834.3660691. Online publication date: 1-Jul-2024
  • (2024) Eye-Hand Typing: Eye Gaze Assisted Finger Typing via Bayesian Processes in AR. IEEE Transactions on Visualization and Computer Graphics 30(5), 2496-2506. DOI: 10.1109/TVCG.2024.3372106. Online publication date: May-2024
  • (2024) Evaluating Target Expansion for Eye Pointing Tasks. Interacting with Computers 36(4), 209-223. DOI: 10.1093/iwc/iwae004. Online publication date: 27-Feb-2024
  • (2024) Gaze analysis. Image and Vision Computing 144(C). DOI: 10.1016/j.imavis.2024.104961. Online publication date: 1-Apr-2024
  • (2024) MASTER-XR: Mixed Reality Ecosystem for Teaching Robotics in Manufacturing. Integrated Systems: Data Driven Engineering, 167-182. DOI: 10.1007/978-3-031-53652-6_10. Online publication date: 17-Sep-2024
  • (2023) GazeRayCursor: Facilitating Virtual Reality Target Selection by Blending Gaze and Controller Raycasting. Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology, 1-11. DOI: 10.1145/3611659.3615693. Online publication date: 9-Oct-2023
