DOI: 10.1145/2971648.2971687

Accuracy of interpreting pointing gestures in egocentric view

Published: 12 September 2016

Abstract

Communicating spatial information by pointing is ubiquitous in human interactions. With the growing use of head-mounted cameras for collaborative purposes, it is important to assess how accurately viewers of the resulting egocentric videos can interpret pointing acts. We conducted an experiment to compare the accuracy of interpreting four pointing techniques: hand pointing, head pointing, gaze pointing, and hand+gaze pointing. Our results suggest that superimposing gaze information on the egocentric video can enable viewers to determine pointing targets more accurately and more confidently. Hand pointing performed best when the pointing target was straight ahead, and head pointing was the least preferred in terms of ease of interpretation. Our results can inform the design of collaborative applications that make use of the egocentric view.
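
The gaze-augmentation condition compared above amounts to drawing the camera wearer's tracked gaze point onto each frame of the head-mounted video before the remote viewer sees it. The sketch below is illustrative only, not the authors' implementation: it assumes gaze samples have already been mapped into normalized (x, y) frame coordinates, one sample per video frame, and the function name overlay_gaze is hypothetical.

    # Minimal sketch: overlay a gaze cursor on an egocentric video.
    # Assumes gaze_points is a sequence of (x, y) tuples in [0, 1]
    # normalized image coordinates, aligned one-to-one with frames
    # (a hypothetical input format chosen for this example).
    import cv2

    def overlay_gaze(video_in, video_out, gaze_points):
        cap = cv2.VideoCapture(video_in)
        fps = cap.get(cv2.CAP_PROP_FPS)
        w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        writer = cv2.VideoWriter(video_out,
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (w, h))
        for gx, gy in gaze_points:
            ok, frame = cap.read()
            if not ok:
                break
            # Map normalized gaze coordinates to pixel coordinates.
            px, py = int(gx * w), int(gy * h)
            # A ring plus a center dot keeps the cursor visible over
            # both light and dark backgrounds.
            cv2.circle(frame, (px, py), 20, (0, 0, 255), 3)
            cv2.circle(frame, (px, py), 3, (0, 0, 255), -1)
            writer.write(frame)
        cap.release()
        writer.release()

In a live study setup the gaze coordinates would come from a wearable eye tracker calibrated to its scene camera; that mapping step is abstracted away here.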





    Published In

    UbiComp '16: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing
    September 2016
    1288 pages
    ISBN:9781450344616
    DOI:10.1145/2971648
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. accuracy of spatial referencing
    2. collaboration
    3. egocentric video
    4. gaze augmentation
    5. pointing

    Qualifiers

    • Research-article

    Conference

    UbiComp '16

    Acceptance Rates

    UbiComp '16 paper acceptance rate: 101 of 389 submissions (26%)
    Overall acceptance rate: 764 of 2,912 submissions (26%)

    Article Metrics

    • Downloads (last 12 months): 45
    • Downloads (last 6 weeks): 3
    Reflects downloads up to 13 Nov 2024

    Cited By

    • (2023) WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions. Proceedings of the ACM on Human-Computer Interaction 7(ISS), 357-375. DOI: 10.1145/3626478. Online publication date: 1-Nov-2023.
    • (2022) Multimodal Driver Referencing: A Comparison of Pointing to Objects Inside and Outside the Vehicle. Proceedings of the 27th International Conference on Intelligent User Interfaces, 483-495. DOI: 10.1145/3490099.3511142. Online publication date: 22-Mar-2022.
    • (2022) Pointing, Pairing and Grouping Gesture Recognition in Virtual Reality. Computers Helping People with Special Needs, 313-320. DOI: 10.1007/978-3-031-08648-9_36. Online publication date: 1-Jul-2022.
    • (2021) Use of Pupil Area and Fixation Maps to Evaluate Visual Behavior of Drivers inside Tunnels at Different Luminance Levels—A Pilot Study. Applied Sciences 11(11), 5014. DOI: 10.3390/app11115014. Online publication date: 28-May-2021.
    • (2021) The Effects of Network Outages on User Experience in Augmented Reality Based Remote Collaboration - An Empirical Study. Proceedings of the ACM on Human-Computer Interaction 5(CSCW2), 1-27. DOI: 10.1145/3476054. Online publication date: 18-Oct-2021.
    • (2021) Multimodal Fusion Using Deep Learning Applied to Driver's Referencing of Outside-Vehicle Objects. 2021 IEEE Intelligent Vehicles Symposium (IV), 1108-1115. DOI: 10.1109/IV48863.2021.9575815. Online publication date: 11-Jul-2021.
    • (2021) "Point at It with Your Smartphone": Assessing the Applicability of Orientation Sensing of Smartphones to Operate IoT Devices. HCI International 2021 - Late Breaking Papers: Multimodality, eXtended Reality, and Artificial Intelligence, 115-131. DOI: 10.1007/978-3-030-90963-5_10. Online publication date: 11-Nov-2021.
    • (2020) Perspective determines the production and interpretation of pointing gestures. Psychonomic Bulletin & Review. DOI: 10.3758/s13423-020-01823-7. Online publication date: 15-Oct-2020.
    • (2019) WeBuildAI. Proceedings of the ACM on Human-Computer Interaction 3(CSCW), 1-35. DOI: 10.1145/3359283. Online publication date: 7-Nov-2019.
    • (2019) Gallery D.C.: Design Search and Knowledge Discovery through Auto-created GUI Component Gallery. Proceedings of the ACM on Human-Computer Interaction 3(CSCW), 1-22. DOI: 10.1145/3359282. Online publication date: 7-Nov-2019.
