DOI: 10.1145/3544548.3580871

Comparing Dwell time, Pursuits and Gaze Gestures for Gaze Interaction on Handheld Mobile Devices

Published: 19 April 2023

Abstract

Gaze is promising for hands-free interaction on mobile devices. However, it is not clear how gaze interaction methods compare to each other in mobile settings. This paper presents the first experiment in a mobile setting that compares three of the most commonly used gaze interaction methods: Dwell time, Pursuits, and Gaze gestures. In our study, 24 participants selected one of 2, 4, 9, 12 and 32 targets via gaze while sitting and while walking. Results show that input using Pursuits is faster than Dwell time and Gaze gestures, especially when there are many targets. Users prefer Pursuits when stationary, but prefer Dwell time when walking. While selection using Gaze gestures is more demanding and slower when there are many targets, it is suitable for contexts where accuracy is more important than speed. We conclude with guidelines for the design of gaze interaction on handheld mobile devices.
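For readers unfamiliar with the techniques: Dwell time selects a target once the gaze rests on it for a fixed duration; Pursuits matches the gaze path against the trajectories of moving on-screen targets (motion correlation); and Gaze gestures trigger commands through sequences of directional eye strokes. The paper's own implementation is not reproduced here, so the following Python sketch only illustrates the standard detection logic behind the first two techniques; the 800 ms dwell and 0.8 correlation thresholds, and all function names, are illustrative assumptions rather than the authors' parameters.

```python
import numpy as np

# Illustrative thresholds only -- not the values used in the paper.
DWELL_MS = 800        # assumed dwell duration before a target is selected
CORR_THRESHOLD = 0.8  # assumed Pearson-correlation cutoff for Pursuits

def dwell_selection(gaze, targets, radius, dt_ms):
    """Dwell time: return the index of the first target fixated
    continuously for at least DWELL_MS, or None.

    gaze:    (n, 2) array of gaze samples (x, y), one per frame
    targets: (k, 2) array of static target centres
    """
    elapsed = np.zeros(len(targets))
    for g in gaze:
        hit = np.linalg.norm(targets - g, axis=1) < radius
        elapsed = np.where(hit, elapsed + dt_ms, 0.0)  # reset when gaze leaves
        winner = int(np.argmax(elapsed))
        if elapsed[winner] >= DWELL_MS:
            return winner
    return None

def pursuit_selection(gaze, trajectories):
    """Pursuits: select the moving target whose on-screen trajectory
    best correlates with the gaze path over the current time window.

    gaze:         (n, 2) array of gaze samples
    trajectories: (k, n, 2) array of target positions over the same window
    """
    best, best_corr = None, CORR_THRESHOLD
    for i, traj in enumerate(trajectories):
        # Correlate x and y independently, then average.
        cx = np.corrcoef(gaze[:, 0], traj[:, 0])[0, 1]
        cy = np.corrcoef(gaze[:, 1], traj[:, 1])[0, 1]
        corr = (cx + cy) / 2.0
        if corr > best_corr:
            best, best_corr = i, corr
    return best

if __name__ == "__main__":
    # Two targets on circular paths, rotating in opposite directions.
    t = np.linspace(0, 2 * np.pi, 120)
    trajs = np.stack([
        np.column_stack([np.cos(t), np.sin(t)]),
        np.column_stack([np.cos(t), -np.sin(t)]),
    ])
    noisy_gaze = trajs[0] + np.random.normal(0, 0.05, trajs[0].shape)
    print(pursuit_selection(noisy_gaze, trajs))  # expected: 0
```

This motion-correlation logic is one plausible reason Pursuits remains fast with many targets, as the abstract reports: each added target contributes another trajectory to compare against, rather than shrinking the fixation area that each dwell target must occupy.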

Supplementary Material

MP4 File (3544548.3580871-video-preview.mp4)
Video Preview
MP4 File (3544548.3580871-talk-video.mp4)
Pre-recorded Video Presentation



    Published In
    CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
    April 2023
    14911 pages
    ISBN: 978-1-4503-9421-5
    DOI: 10.1145/3544548
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Eye Tracking
    2. Gaze-based Interaction
    3. Smartphones
    4. Smooth pursuit
    5. Tablets

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    CHI '23

    Acceptance Rates

    Overall Acceptance Rate 6,199 of 26,314 submissions, 24%


    Article Metrics

    • Downloads (last 12 months): 567
    • Downloads (last 6 weeks): 90

    Reflects downloads up to 16 Nov 2024

    Cited By
    • PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(3), 1–28 (2024). https://doi.org/10.1145/3678595
    • Exploring Bi-Manual Teleportation in Virtual Reality. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 754–764 (2024). https://doi.org/10.1109/VR58804.2024.00095
    • Design recommendations of target size and tracking speed under circular and square trajectories for smooth pursuit with Euclidean algorithm in eye-control system. Displays 81, 102608 (2024). https://doi.org/10.1016/j.displa.2023.102608
    • Research on a spatial–temporal characterisation of blink-triggered eye control interactions. Advanced Engineering Informatics 59 (2024). https://doi.org/10.1016/j.aei.2023.102297
    • Investigating Privacy Perceptions and Subjective Acceptance of Eye Tracking on Handheld Mobile Devices. Proceedings of the ACM on Human-Computer Interaction 7(ETRA), 1–16 (2023). https://doi.org/10.1145/3591133
    • Affordance-Guided User Elicitation of Interaction Concepts for Unimodal Gaze Control of Potential Holographic 3D UIs in Automotive Applications. 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 14–19 (2023). https://doi.org/10.1109/ISMAR-Adjunct60411.2023.00011
