
DOI: 10.1145/3536221.3556612

Two-Step Gaze Guidance

Published: 07 November 2022

Abstract

One challenge in providing guidance for search tasks is directing the user's visual attention to particular objects in a potentially large search space. Previous work has guided attention with visual, audio, or haptic cues. State-of-the-art methods either hint at the approximate direction of the target, yielding a fast but less accurate search, or require the user to perform a fine-grained search from the beginning, yielding a precise but less efficient one. To combine the advantages of both, we propose an interaction concept called Two-Step Gaze Guidance. The first step quickly guides the user toward the approximate direction of the target; the second step provides fine-grained guidance toward its exact location. A between-subjects study (N = 69) with five conditions compared two-step gaze guidance with single-step gaze guidance. Results revealed that the proposed method outperformed single-step guidance: the two-step design slightly improved search accuracy, and using spatial audio as the first-step cue significantly improved search efficiency. Our results also suggest several guidelines for designing gaze guidance methods.
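The concept hinges on a hand-off between a coarse directional cue and a fine-grained one. As a rough illustration only, and not the authors' implementation, the sketch below switches between the two steps based on the angular distance between the current gaze direction and the target; the 15° hand-off threshold and the cue descriptions are assumptions made for the example.

```python
import math

# Assumed hand-off angle between the two guidance steps (not from the paper).
COARSE_THRESHOLD_DEG = 15.0

def angular_distance_deg(gaze, target):
    """Angle in degrees between two unit 3D direction vectors."""
    dot = sum(g * t for g, t in zip(gaze, target))
    dot = max(-1.0, min(1.0, dot))  # guard against floating-point drift
    return math.degrees(math.acos(dot))

def select_guidance_step(gaze, target):
    """Step 1 (coarse cue, e.g. spatial audio from the target direction)
    while the target is far from the gaze direction; step 2 (fine-grained
    cue toward the exact location) once it is close."""
    if angular_distance_deg(gaze, target) > COARSE_THRESHOLD_DEG:
        return 1
    return 2

# Example: gaze straight ahead, target 90 degrees to the right -> step 1.
print(select_guidance_step((0.0, 0.0, 1.0), (1.0, 0.0, 0.0)))  # prints 1
```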



Published In

ICMI '22: Proceedings of the 2022 International Conference on Multimodal Interaction
November 2022
830 pages
ISBN: 9781450393904
DOI: 10.1145/3536221
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. audio feedback
  2. gaze-guidance
  3. haptic feedback
  4. non-visual guidance

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICMI '22

Acceptance Rates

Overall Acceptance Rate: 453 of 1,080 submissions, 42%
