
HeadCross: Exploring Head-Based Crossing Selection on Head-Mounted Displays

Published: 18 March 2020

Abstract

We propose HeadCross, a head-based interaction method for selecting targets on VR and AR head-mounted displays (HMDs). With HeadCross, users control the pointer with head movements; to select a target, they move the pointer into the target and then back across its boundary. In this way, users can select targets without using their hands, which is helpful when the hands are occupied by other tasks, e.g., holding a handrail. However, a major challenge for head-based methods is the false-positive problem: unintentional head movements may be incorrectly recognized as HeadCross gestures and trigger selections. To address this issue, we first conduct a user study (Study 1) to observe user behavior while performing HeadCross and to identify the behavioral differences between HeadCross and other types of head movements. Based on the results, we discuss design implications, extract useful features, and develop a recognition algorithm for HeadCross. To evaluate HeadCross, we conduct two further user studies. In Study 2, we compare HeadCross to a dwell-based selection method, a button-press method, and a mid-air gesture-based method. Two typical target selection tasks (text entry and menu selection) are tested on both VR and AR interfaces. Results show that compared to the dwell-based method, HeadCross improves the sense of control, and compared to the two hand-based methods, it improves interaction efficiency and reduces fatigue. In Study 3, we compare HeadCross to three alternative designs of head-only selection methods; participants perceived HeadCross to be significantly faster than the alternatives. We conclude with a discussion of the interaction potential and limitations of HeadCross.
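The core selection rule described above — the pointer must enter the target and then cross back out over its boundary — can be sketched as follows. This is a minimal illustration assuming a circular 2D target and a stream of pointer positions; all names and geometry here are hypothetical, and the paper's actual recognizer additionally uses the behavioral features identified in Study 1 to reject unintentional head movements.

```python
import math

def detect_headcross(samples, center, radius):
    """Detect a HeadCross-style selection on a circular target.

    samples: sequence of (x, y) pointer positions over time.
    center, radius: target geometry (an assumption for illustration).
    Returns the sample index at which the selection fires, or None.
    """
    inside = False
    for i, (x, y) in enumerate(samples):
        d = math.hypot(x - center[0], y - center[1])
        if not inside and d <= radius:
            inside = True   # pointer crossed into the target
        elif inside and d > radius:
            return i        # crossed back out over the boundary: select
    return None

# Pointer dips into a target at (0, 0) with radius 1, then leaves again.
path = [(3, 0), (2, 0), (0.5, 0), (0.2, 0), (1.5, 0), (3, 0)]
assert detect_headcross(path, (0, 0), 1.0) == 4

# Merely passing near the target never triggers a selection.
assert detect_headcross([(3, 0), (2, 0)], (0, 0), 1.0) is None
```

Note that a pointer that enters the target and stays there never fires, which is what distinguishes crossing selection from dwell-based selection.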

Supplementary Material

yan (yan.zip)
Supplemental movie, appendix, image, and software files for HeadCross: Exploring Head-Based Crossing Selection on Head-Mounted Displays




    Published In

    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies  Volume 4, Issue 1
    March 2020
    1006 pages
    EISSN:2474-9567
    DOI:10.1145/3388993

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published in IMWUT Volume 4, Issue 1


    Author Tags

    1. crossing selection
    2. hands-free selection
    3. head-based interaction

    Qualifiers

    • Research-article
    • Research
    • Refereed

    Funding Sources

    • the Natural Science Foundation of China
    • the National Key Research and Development Plan


    Article Metrics

    • Downloads (Last 12 months)112
    • Downloads (Last 6 weeks)25
    Reflects downloads up to 02 Oct 2024


    Cited By

    • (2024)The Fourth Workshop on Multiple Input Modalities and Sensations for VR/AR Interactions (MIMSVAI)Companion of the 2024 on ACM International Joint Conference on Pervasive and Ubiquitous Computing10.1145/3675094.3677570(988-991)Online publication date: 5-Oct-2024
    • (2024)Body Language for VUIs: Exploring Gestures to Enhance Interactions with Voice User InterfacesProceedings of the 2024 ACM Designing Interactive Systems Conference10.1145/3643834.3660691(133-150)Online publication date: 1-Jul-2024
    • (2024)Task and Environment-Aware Virtual Scene Rearrangement for Enhanced Safety in Virtual RealityIEEE Transactions on Visualization and Computer Graphics10.1109/TVCG.2024.337211530:5(2517-2526)Online publication date: 4-Mar-2024
    • (2024)Voice-Augmented Virtual Reality Interface for Serious Games2024 IEEE Conference on Games (CoG)10.1109/CoG60054.2024.10645616(1-8)Online publication date: 5-Aug-2024
    • (2024)HCI Research and Innovation in China: A 10-Year PerspectiveInternational Journal of Human–Computer Interaction10.1080/10447318.2024.232385840:8(1799-1831)Online publication date: 22-Mar-2024
    • (2024)A real-time camera-based gaze-tracking system involving dual interactive modes and its application in gamingMultimedia Systems10.1007/s00530-023-01204-930:1Online publication date: 16-Jan-2024
    • (2024)Phygital Paper CraftingExtended Reality10.1007/978-3-031-71707-9_11(172-180)Online publication date: 11-Sep-2024
    • (2024)Exploration of User Experience in Virtual Reality Environment. A Systematic ReviewDigital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management10.1007/978-3-031-61060-8_23(320-338)Online publication date: 29-Jun-2024
    • (2023)MazeVR: Immersion and Interaction Using Google Cardboard and Continuous Gesture Recognition on SmartwatchesProceedings of the 28th International ACM Conference on 3D Web Technology10.1145/3611314.3615912(1-5)Online publication date: 9-Oct-2023
    • (2023)The Third Workshop on Multiple Input Modalities and Sensations for VR/AR Interactions (MIMSVAI)Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing10.1145/3594739.3605105(769-772)Online publication date: 8-Oct-2023
