DOI: 10.1145/3652920.3652941
Research article · Open access

GestureMark: Shortcut Input Technique using Smartwatch Touch Gestures for XR Glasses

Published: 01 May 2024

Abstract

We propose GestureMark, a novel input technique for target selection on XR glasses that uses smartwatch touch gestures as input. As XR glasses become smaller and lighter, their everyday use is growing rapidly, creating demand for efficient shortcut input. We explored gesture input on the smartwatch touchscreen, including simple swipes, swipe combinations, and bezel-to-bezel (B2B) gestures, as an input modality. In an experiment with 16 participants, we found that while swipe gestures were efficient for four-choice selections, B2B gestures were superior for 16-choice inputs. Feedback mechanisms did not improve performance but reduced perceived workload. Our findings highlight the potential of integrating smartwatches as secondary input devices for XR glasses.

Supplemental Material

MP4 File: Video



Published In

AHs '24: Proceedings of the Augmented Humans International Conference 2024
April 2024, 355 pages
ISBN: 9798400709807
DOI: 10.1145/3652920
This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. XR glasses
      2. bezel gestures
      3. marking menu
      4. smartwatch input

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

AHs 2024: The Augmented Humans International Conference
April 4–6, 2024, Melbourne, VIC, Australia

