DOI: 10.1145/3281505.3281541
Performer vs. observer: whose comfort level should we consider when examining the social acceptability of input modalities for head-worn display?

Published: 28 November 2018

Abstract

The popularity of head-worn display (HWD) technologies such as Virtual Reality (VR) and Augmented Reality (AR) headsets is growing rapidly. To predict their commercial success, it is essential to understand the acceptability of these new technologies, along with the new methods used to interact with them. In this vein, the evaluation of the social acceptability of interactions with these technologies has received significant attention, particularly from the performer's (i.e., user's) viewpoint. However, little work has considered social acceptability concerns from the observer's (i.e., spectator's) perspective. Although HWDs are designed to be personal devices, interacting with their interfaces is often quite noticeable, making them an ideal platform for contrasting performer and observer perspectives on social acceptability. Through two studies, this paper contrasts performers' and observers' perspectives on the social acceptability of interactions with HWDs under different social contexts. The results indicate both similarities and differences in acceptability, and advocate for the importance of including both perspectives when exploring the social acceptability of emerging technologies. We provide guidelines for understanding social acceptability specifically from the observer's perspective, thus complementing current practices for understanding the acceptability of interacting with these devices.




    Published In

    VRST '18: Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology
    November 2018, 570 pages
    ISBN: 9781450360869
    DOI: 10.1145/3281505

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. HWDs
    2. augmented reality
    3. input modalities
    4. social acceptance

    Qualifiers

    • Research-article

    Funding Sources

    • Natural Sciences and Engineering Research Council (NSERC)

    Conference

    VRST '18

    Acceptance Rates

    Overall Acceptance Rate 66 of 254 submissions, 26%

    Cited By

    • (2024) The Impact of Near-Future Mixed Reality Contact Lenses on Users' Lives via an Immersive Speculative Enactment and Focus Groups. Proceedings of the 2024 ACM Symposium on Spatial User Interaction, 1-13. DOI: 10.1145/3677386.3682099. Online publication date: 7-Oct-2024.
    • (2024) TOM: A Development Platform For Wearable Intelligent Assistants. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 837-843. DOI: 10.1145/3675094.3678382. Online publication date: 5-Oct-2024.
    • (2024) Demonstrating TOM: A Development Platform For Wearable Intelligent Assistants. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 214-219. DOI: 10.1145/3675094.3677551. Online publication date: 5-Oct-2024.
    • (2024) Lipwatch: Enabling Silent Speech Recognition on Smartwatches using Acoustic Sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(2), 1-29. DOI: 10.1145/3659614. Online publication date: 15-May-2024.
    • (2024) Digital Eyes: Social Implications of XR EyeSight. Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology, 1-2. DOI: 10.1145/3641825.3689526. Online publication date: 9-Oct-2024.
    • (2024) Ethical Implications of Pervasive Augmented Reality. Adjunct Proceedings of the 26th International Conference on Mobile Human-Computer Interaction, 1-3. DOI: 10.1145/3640471.3686641. Online publication date: 21-Sep-2024.
    • (2024) Demonstrating TOM: A Development Platform for Wearable Intelligent Assistants in Daily Activities. Adjunct Proceedings of the 26th International Conference on Mobile Human-Computer Interaction, 1-6. DOI: 10.1145/3640471.3680445. Online publication date: 21-Sep-2024.
    • (2024) PANDALens: Towards AI-Assisted In-Context Writing on OHMD During Travels. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-24. DOI: 10.1145/3613904.3642320. Online publication date: 11-May-2024.
    • (2024) Make Interaction Situated: Designing User Acceptable Interaction for Situated Visualization in Public Environments. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-21. DOI: 10.1145/3613904.3642049. Online publication date: 11-May-2024.
    • (2024) PalmSpace. International Journal of Human-Computer Studies 184(C). DOI: 10.1016/j.ijhcs.2024.103219. Online publication date: 1-Apr-2024.
