Lattice Menu: A Low-Error Gaze-Based Marking Menu Utilizing Target-Assisted Gaze Gestures on a Lattice of Visual Anchors

DOI: 10.1145/3491102.3501977

Published: 29 April 2022

Abstract

We present Lattice Menu, a gaze-based marking menu that utilizes a lattice of visual anchors to help users perform accurate gaze pointing for menu item selection. Users who know the location of the desired item can leverage target-assisted gaze gestures for multilevel item selection by looking at the visual anchors along the gaze trajectories. Our evaluation showed that Lattice Menu exhibits a considerably low error rate (~1%) and quick menu selection times (1.3–1.6 s) for expert usage across various menu structures (4 × 4 × 4 and 6 × 6 × 6) and sizes (8, 10, and 12°). Compared with a traditional gaze-based marking menu that does not utilize visual targets, Lattice Menu showed approximately five times fewer menu selection errors for expert usage. In a post-interview, all 12 subjects preferred Lattice Menu, and most (8 out of 12) commented that the provision of visual targets facilitated more stable menu selections with reduced eye fatigue.
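The abstract describes multilevel selection as a sequence of gaze fixations on visual anchors along a trajectory. As a rough, hypothetical illustration of that idea (not the authors' implementation), the Python sketch below matches a stream of gaze samples against a small set of anchor positions and confirms an anchor once the gaze dwells near it for a few consecutive samples; the class name, anchor layout, dwell threshold, and hit radius are all assumptions made for exposition.

```python
# Illustrative sketch only: a simplified model of target-assisted gaze-gesture
# selection on a lattice of visual anchors. All names, positions, and thresholds
# below are hypothetical and are not taken from the paper.
import math

class LatticeMenuSketch:
    def __init__(self, anchors, dwell_samples=6, radius=1.5):
        self.anchors = anchors              # {anchor_id: (x, y) in degrees of visual angle}
        self.dwell_samples = dwell_samples  # consecutive samples required on one anchor
        self.radius = radius                # hit radius around an anchor, in degrees
        self._streak = (None, 0)            # (anchor currently fixated, sample count)
        self.path = []                      # confirmed anchors = multilevel menu path

    def _nearest_anchor(self, gaze):
        # Return the anchor whose center is closest to the gaze sample, if within radius.
        aid, pos = min(self.anchors.items(), key=lambda kv: math.dist(kv[1], gaze))
        return aid if math.dist(pos, gaze) <= self.radius else None

    def feed(self, gaze_sample, depth=3):
        """Consume one gaze sample; return the menu path once `depth` anchors are confirmed."""
        aid = self._nearest_anchor(gaze_sample)
        last, count = self._streak
        count = count + 1 if (aid is not None and aid == last) else 1
        self._streak = (aid, count)
        if aid is not None and count == self.dwell_samples and (not self.path or self.path[-1] != aid):
            self.path.append(aid)           # anchor confirmed as the next gesture waypoint
        return self.path if len(self.path) == depth else None

# Hypothetical usage: dwelling on anchors "A", "B", then "C" selects the item A > B > C.
menu = LatticeMenuSketch(anchors={"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (4.0, 4.0)})
selection = None
for sample in [(0.1, 0.0)] * 6 + [(4.0, 0.2)] * 6 + [(3.9, 4.1)] * 6:
    selection = menu.feed(sample) or selection
print(selection)  # ['A', 'B', 'C']
```

A real system would, of course, consume eye-tracker data in a VR/AR headset and render the anchor lattice visually; the sketch only conveys that selection reduces to recognizing an ordered sequence of fixations on known anchor positions.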

    Published In

    CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
    April 2022
    10459 pages
    ISBN:9781450391573
    DOI:10.1145/3491102

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. AR/VR
    2. Eye Tracking
    3. Gaze-Based Interaction
    4. Marking Menu

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • Institute for Information & Communications Technology Promotion

    Conference

    CHI '22
    Sponsor:
    CHI '22: CHI Conference on Human Factors in Computing Systems
    April 29 - May 5, 2022
    New Orleans, LA, USA

    Acceptance Rates

    Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%

    Cited By

    • (2024) Exploring Gaze-Based Menu Navigation in Virtual Environments. Proceedings of the 2024 ACM Symposium on Spatial User Interaction, 1–2. https://doi.org/10.1145/3677386.3688887. Online publication date: 7-Oct-2024.
    • (2024) GestureMark: Shortcut Input Technique using Smartwatch Touch Gestures for XR Glasses. Proceedings of the Augmented Humans International Conference 2024, 63–71. https://doi.org/10.1145/3652920.3652941. Online publication date: 4-Apr-2024.
    • (2024) Enhancing Fixation and Pursuit: Optimizing Field of View and Number of Targets for Selection Performance in Virtual Reality. International Journal of Human–Computer Interaction 41(2), 1221–1233. https://doi.org/10.1080/10447318.2024.2313888. Online publication date: 15-Feb-2024.
    • (2024) BIGaze. Advanced Engineering Informatics 58(C). https://doi.org/10.1016/j.aei.2023.102159. Online publication date: 4-Mar-2024.
    • (2024) Selection in Stride: Comparing Button- and Head-Based Augmented Reality Interaction During Locomotion. HCI International 2024 Posters, 22–32. https://doi.org/10.1007/978-3-031-61950-2_3. Online publication date: 7-Jun-2024.
    • (2022) Dwell Selection with ML-based Intent Prediction Using Only Gaze Data. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6(3), 1–21. https://doi.org/10.1145/3550301. Online publication date: 7-Sep-2022.
