
Look & Pedal: Hands-free Navigation in Zoomable Information Spaces through Gaze-supported Foot Input

Published: 09 November 2015
DOI: 10.1145/2818346.2820751

Abstract

For a desktop computer, we investigate how to enhance conventional mouse and keyboard interaction by combining the input modalities of gaze and foot. This multimodal approach offers the potential for fluently performing both manual input (e.g., for precise object selection) and gaze-supported foot input (for pan and zoom) in zoomable information spaces in quick succession or even in parallel. For this, we take advantage of fast gaze input to implicitly indicate where to navigate to, combined with explicit foot input for speed control, while leaving the hands free for further manual input. This allows gaze input to be used in a subtle and unobtrusive way. We have carefully elaborated and investigated three variants of foot controls incorporating one-, two-, and multidirectional foot pedals in combination with gaze. These were evaluated and compared to mouse-only input in a user study using Google Earth as a geographic information system. The results suggest that gaze-supported foot input is feasible for convenient, user-friendly navigation and comparable to mouse input; they encourage further investigation of gaze-supported foot controls.
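To make the interplay of the two modalities concrete, the following minimal Python sketch illustrates one plausible reading of the mapping described in the abstract: the gaze point implicitly selects where to pan and zoom, while the foot pedal's deflection explicitly sets the speed. All names, the pedal range, and the linear speed mapping are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class Viewport:
    center_x: float  # view center in world coordinates
    center_y: float
    zoom: float      # scale factor; larger means more zoomed in

def navigate_step(view: Viewport,
                  gaze_x: float, gaze_y: float,  # gazed-at point in world coordinates
                  pedal: float,                  # pedal deflection, assumed in [-1, 1]
                  dt: float,                     # time step in seconds
                  pan_gain: float = 1.0,
                  zoom_gain: float = 0.5) -> Viewport:
    """One navigation update: pan toward the gaze point and change zoom
    at a rate proportional to the pedal deflection (forward = zoom in,
    backward = zoom out). Hypothetical mapping, not the paper's code."""
    speed = abs(pedal)
    # Pan the view center toward the gazed-at location; the pedal only
    # controls how fast, the gaze controls where.
    view.center_x += (gaze_x - view.center_x) * pan_gain * speed * dt
    view.center_y += (gaze_y - view.center_y) * pan_gain * speed * dt
    # Zoom in or out depending on pedal direction; the concurrent pan
    # pulls the gazed-at region toward the center of the view.
    view.zoom *= 1.0 + zoom_gain * pedal * dt
    return view

# Example: with the pedal half-pressed forward, the view drifts toward the
# gaze point and zooms in gradually.
view = Viewport(center_x=0.0, center_y=0.0, zoom=1.0)
view = navigate_step(view, gaze_x=10.0, gaze_y=5.0, pedal=0.5, dt=0.016)

In this reading, gaze never triggers navigation on its own; the pedal acts as the explicit clutch and rate control, which is what keeps the gaze input subtle and leaves the hands free for mouse and keyboard input.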

Supplementary Material

ZIP File (icmi1356.zip)
Supplemental movie file: The video shows the gaze-supported foot navigation techniques as described and evaluated in the paper.




    Published In

    ICMI '15: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction
    November 2015
    678 pages
    ISBN:9781450339124
    DOI:10.1145/2818346
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 09 November 2015


    Author Tags

    1. eye tracking
    2. foot input
    3. gaze input
    4. gaze-supported interaction
    5. multimodal interaction
    6. navigation
    7. pan
    8. zoom

    Qualifiers

    • Research-article

    Conference

ICMI '15: International Conference on Multimodal Interaction
November 9 - 13, 2015
Seattle, Washington, USA

    Acceptance Rates

ICMI '15 Paper Acceptance Rate: 52 of 127 submissions, 41%
Overall Acceptance Rate: 453 of 1,080 submissions, 42%


    Article Metrics

• Downloads (Last 12 months): 60
• Downloads (Last 6 weeks): 4
Reflects downloads up to 20 Nov 2024

Cited By

• (2024) Exploration of Foot-based Text Entry Techniques for Virtual Reality Environments. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 10.1145/3613904.3642757, pp. 1-17. Online publication date: 11-May-2024
• (2024) Towards an Eye-Brain-Computer Interface: Combining Gaze with the Stimulus-Preceding Negativity for Target Selections in XR. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 10.1145/3613904.3641925, pp. 1-17. Online publication date: 11-May-2024
• (2024) Evaluating the performance of gaze interaction for map target selection. Cartography and Geographic Information Science, 10.1080/15230406.2024.2335331, pp. 1-21. Online publication date: 9-Apr-2024
• (2023) PalmGazer: Unimanual Eye-hand Menus in Augmented Reality. Proceedings of the 2023 ACM Symposium on Spatial User Interaction, 10.1145/3607822.3614523, pp. 1-12. Online publication date: 13-Oct-2023
• (2023) Predicting Gaze-based Target Selection in Augmented Reality Headsets based on Eye and Head Endpoint Distributions. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 10.1145/3544548.3581042, pp. 1-14. Online publication date: 19-Apr-2023
• (2023) TicTacToes: Assessing Toe Movements as an Input Modality. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 10.1145/3544548.3580954, pp. 1-17. Online publication date: 19-Apr-2023
• (2023) DataDancing: An Exploration of the Design Space For Visualisation View Management for 3D Surfaces and Spaces. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 10.1145/3544548.3580827, pp. 1-17. Online publication date: 19-Apr-2023
• (2023) Discussing Facets of Hybrid User Interfaces for the Medical Domain. 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 10.1109/ISMAR-Adjunct60411.2023.00059, pp. 257-260. Online publication date: 16-Oct-2023
• (2023) Desiderata for Intelligent Maps: A Multiperspective Compilation. KN - Journal of Cartography and Geographic Information, 10.1007/s42489-023-00142-w, 73:3, pp. 183-198. Online publication date: 17-Jun-2023
• (2023) Comparing alternative modalities in the context of multimodal human–robot interaction. Journal on Multimodal User Interfaces, 10.1007/s12193-023-00421-w, 18:1, pp. 69-85. Online publication date: 19-Oct-2023
• Show More Cited By
