
Exploring User Defined Gestures for Ear-Based Interactions

Published: 04 November 2020

Abstract

The human ear is highly sensitive and accessible, making it especially suitable as an interface for interacting with smart earpieces or augmented glasses. However, previous work on ear-based input has mainly addressed gesture-sensing technology and researcher-designed gestures. This paper aims to deepen the understanding of gesture design. To that end, we conducted a user elicitation study in which 28 participants each designed gestures for 31 smart-device tasks, yielding 868 gestures in total. From these gestures, we compiled a taxonomy and distilled the considerations underlying the participants' designs, which also offer insights into their design rationales and preferences. Based on these results, we propose a set of user-defined gestures and share further findings. We hope this work sheds light not only on sensing technologies for ear-based input, but also on the design of future wearable interfaces.
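The abstract describes a gesture-elicitation study, and "guessability" appears among the author tags. As an illustrative aside (not code from the paper itself), elicitation data of this kind is conventionally analyzed with the agreement rate AR(r), formalized by Vatavu and Wobbrock; a minimal sketch, with hypothetical gesture labels, might look like:

```python
from collections import Counter

def agreement_rate(proposals):
    """Vatavu-Wobbrock agreement rate AR(r) for a single referent.

    `proposals` holds one gesture label per participant; identical labels
    mean those participants proposed the same gesture for the referent.
    Returns 0.0 when every proposal differs and 1.0 when all agree.
    """
    n = len(proposals)
    if n < 2:
        return 1.0  # agreement is undefined for a single proposal
    sizes = Counter(proposals).values()  # group participants by identical proposal
    s = sum((k / n) ** 2 for k in sizes)
    return (n / (n - 1)) * s - 1 / (n - 1)

# Hypothetical data: 4 participants proposing gestures for one referent.
# Only one of the six participant pairs agrees, so AR ≈ 0.167.
print(agreement_rate(["tap", "tap", "swipe", "pinch"]))
```

The score equals the fraction of participant pairs that proposed the same gesture, which is why it is a common headline statistic when deriving a user-defined gesture set such as the one this paper proposes.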



Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 4, Issue ISS
November 2020
488 pages
EISSN: 2573-0142
DOI: 10.1145/3433930
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. ear-based input
  2. gestures
  3. guessability
  4. think-aloud
  5. user-defined

Qualifiers

  • Research-article


Article Metrics

  • Downloads (last 12 months): 107
  • Downloads (last 6 weeks): 12
Reflects downloads up to 01 Oct 2024

Cited By

  • (2024) Enhancement of GUI Display Error Detection Using Improved Faster R-CNN and Multi-Scale Attention Mechanism. Applied Sciences 14(3):1144. https://doi.org/10.3390/app14031144 (30 Jan 2024)
  • (2024) Head 'n Shoulder: Gesture-Driven Biking Through Capacitive Sensing Garments to Innovate Hands-Free Interaction. Proceedings of the ACM on Human-Computer Interaction 8(MHCI):1-20. https://doi.org/10.1145/3676510 (24 Sep 2024)
  • (2024) Exploring User-Defined Gestures as Input for Hearables and Recognizing Ear-Level Gestures with IMUs. Proceedings of the ACM on Human-Computer Interaction 8(MHCI):1-23. https://doi.org/10.1145/3676503 (24 Sep 2024)
  • (2024) Privacy Preserving Release of Mobile Sensor Data. Proceedings of the 19th International Conference on Availability, Reliability and Security, 1-13. https://doi.org/10.1145/3664476.3664519 (30 Jul 2024)
  • (2024) EarSlide. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(1):1-29. https://doi.org/10.1145/3643515 (6 Mar 2024)
  • (2024) Exploring Uni-manual Around Ear Off-Device Gestures for Earables. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(1):1-29. https://doi.org/10.1145/3643513 (6 Mar 2024)
  • (2024) MAF: Exploring Mobile Acoustic Field for Hand-to-Face Gesture Interactions. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-20. https://doi.org/10.1145/3613904.3642437 (11 May 2024)
  • (2024) Acoustic-based Alphanumeric Input Interface for Earables. 2024 33rd International Conference on Computer Communications and Networks (ICCCN), 1-9. https://doi.org/10.1109/ICCCN61486.2024.10637602 (29 Jul 2024)
  • (2023) Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies. ACM Computing Surveys 56(5):1-55. https://doi.org/10.1145/3636458 (7 Dec 2023)
  • (2023) VibPath. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(3):1-26. https://doi.org/10.1145/3610894 (27 Sep 2023)
