
FGFlick: Augmenting Single-Finger Input Vocabulary for Smartphones with Simultaneous Finger and Gaze Flicks

  • Conference paper
Human-Computer Interaction – INTERACT 2021 (INTERACT 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12936)

Included in the following conference series: IFIP Conference on Human-Computer Interaction (INTERACT)

Abstract

FGFlick is an interaction technique that combines a single-finger flick with a simultaneous gaze gesture: the user flicks on the smartphone screen while moving their gaze along a straight line. Because each finger flick can be paired with a distinct gaze movement, FGFlick augments the single-finger input vocabulary. In our evaluation of the FGFlick gestures, participants achieved a success rate of 84.0%.
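To make the idea concrete, below is a minimal, hypothetical sketch in Swift of how such a two-modality flick could be fused; it is not the authors' implementation. It assumes each modality delivers a roughly linear stroke (a hypothetical Stroke value with start point, end point, and end time), classifies each stroke into one of four flick directions, and pairs the two classifications when they end within a short time window. The 20-point length threshold and the 0.5 s pairing window are illustrative assumptions, not parameters from the paper.

```swift
import Foundation

// Hypothetical FGFlick fusion sketch (not the paper's implementation).
// Four finger directions x four gaze directions would yield up to 16
// commands from a single finger -- the sense in which the vocabulary grows.

enum FlickDirection: String {
    case up, down, left, right
}

/// A roughly linear stroke reported by either the touch screen or a gaze
/// estimator (e.g. face tracking). All fields are assumed inputs.
struct Stroke {
    let start: (x: Double, y: Double)
    let end: (x: Double, y: Double)
    let endTime: TimeInterval          // when the stroke finished, in seconds
}

/// Classify a stroke by its dominant axis; strokes shorter than `minLength`
/// (an assumed jitter threshold) are rejected.
func classify(_ s: Stroke, minLength: Double = 20.0) -> FlickDirection? {
    let dx = s.end.x - s.start.x
    let dy = s.end.y - s.start.y
    guard max(abs(dx), abs(dy)) >= minLength else { return nil }
    if abs(dx) >= abs(dy) {
        return dx > 0 ? .right : .left
    } else {
        return dy > 0 ? .down : .up    // screen coordinates: y grows downward
    }
}

/// Pair a finger flick with a gaze flick when both classify successfully and
/// end within `window` seconds of each other (the window is an assumption).
func fuseFGFlick(finger: Stroke, gaze: Stroke, window: TimeInterval = 0.5)
    -> (finger: FlickDirection, gaze: FlickDirection)? {
    guard let f = classify(finger),
          let g = classify(gaze),
          abs(finger.endTime - gaze.endTime) <= window else { return nil }
    return (f, g)
}

// Example: a rightward finger flick paired with an upward gaze flick.
let fingerStroke = Stroke(start: (100, 300), end: (220, 305), endTime: 1.00)
let gazeStroke   = Stroke(start: (160, 400), end: (165, 250), endTime: 1.12)
if let gesture = fuseFGFlick(finger: fingerStroke, gaze: gazeStroke) {
    print("FGFlick detected: finger \(gesture.finger), gaze \(gesture.gaze)")
    // Prints: FGFlick detected: finger right, gaze up
}
```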




Author information


Corresponding author

Correspondence to Yuki Yamato.



Copyright information

© 2021 IFIP International Federation for Information Processing

About this paper


Cite this paper

Yamato, Y., Suzuki, Y., Takahashi, S. (2021). FGFlick: Augmenting Single-Finger Input Vocabulary for Smartphones with Simultaneous Finger and Gaze Flicks. In: Ardito, C., et al. (eds.) Human-Computer Interaction – INTERACT 2021. INTERACT 2021. Lecture Notes in Computer Science, vol. 12936. Springer, Cham. https://doi.org/10.1007/978-3-030-85607-6_50


  • DOI: https://doi.org/10.1007/978-3-030-85607-6_50

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85606-9

  • Online ISBN: 978-3-030-85607-6

  • eBook Packages: Computer Science, Computer Science (R0)
