

Automated Usability Evaluation of Virtual Reality Applications

Published: 24 April 2019

Abstract

Virtual reality (VR) and VR applications have reached end users and, hence, the demands on usability, even for novel applications, have increased. This situation calls for VR usability evaluation methods that can be applied quickly, even after a first release of an application. In this article, we describe such an approach, which is fully automated and does not ask users to perform predefined tasks in a fixed test setting. Instead, it works on recordings of the actual usage of a VR application, from which it generates task trees. It then analyzes these task trees for usability smells, i.e., user behavior that indicates usability issues. Our approach provides detailed descriptions of the usability issues found and of how they can be solved. We performed a large case study to evaluate our approach and show that it correctly identifies usability issues. Although our approach is applicable to different VR interaction modalities, such as gaze, controller, or hand interaction, it also has limitations. For example, it can detect diverse issues related to user efficiency, but specific misunderstandings on the part of users cannot be uncovered.
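To make the pipeline described in the abstract concrete, the sketch below illustrates, in strongly simplified form, how recorded interaction events might be folded into a task tree and then scanned for a usability smell such as unusually frequent repetition of the same action sequence. This is a minimal, hypothetical illustration, not the authors' implementation: the Event and TaskNode classes, the adjacent-pair grouping heuristic, and the repetition threshold are assumptions made for this example, and the paper's actual task tree generation and smell detection are considerably more elaborate.

```python
# Hypothetical sketch of the recorded-usage -> task tree -> usability smell pipeline.
from collections import Counter
from dataclasses import dataclass, field
from typing import List


@dataclass
class Event:
    """A single recorded user interaction, e.g. a controller click on a virtual object."""
    action: str   # e.g. "click", "grab", "gaze"
    target: str   # identifier of the virtual object or UI element


@dataclass
class TaskNode:
    """A node of a generated task tree: a (sub)task plus how often it occurred."""
    label: str
    children: List["TaskNode"] = field(default_factory=list)
    count: int = 0


def build_task_tree(trace: List[Event], min_repetitions: int = 2) -> TaskNode:
    """Toy task-tree generation: fold repeated adjacent event pairs into subtasks."""
    root = TaskNode(label="session")
    pair_counts = Counter(
        (f"{a.action}:{a.target}", f"{b.action}:{b.target}")
        for a, b in zip(trace, trace[1:])
    )
    for (first, second), count in pair_counts.items():
        if count >= min_repetitions:
            root.children.append(TaskNode(label=f"{first} -> {second}", count=count))
    return root


def detect_repetition_smell(tree: TaskNode, threshold: int = 5) -> List[str]:
    """Flag subtasks users repeat unusually often, a possible sign of inefficient interaction."""
    return [
        f"Usability smell: '{child.label}' occurred {child.count} times; "
        "check whether this interaction can be shortened."
        for child in tree.children
        if child.count >= threshold
    ]


if __name__ == "__main__":
    # Hypothetical recording: the user repeatedly grabs and releases the same lever.
    trace = [Event("grab", "lever"), Event("release", "lever")] * 6
    for finding in detect_repetition_smell(build_task_tree(trace)):
        print(finding)
```

Running the example prints a finding for the repeatedly grabbed lever, mirroring the kind of efficiency-related issue the abstract says the approach can surface.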



Published In

ACM Transactions on Computer-Human Interaction, Volume 26, Issue 3
June 2019
254 pages
ISSN: 1073-0516
EISSN: 1557-7325
DOI: 10.1145/3328720
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 24 April 2019
Accepted: 01 December 2018
Revised: 01 October 2018
Received: 01 January 2018
Published in TOCHI Volume 26, Issue 3


Author Tags

  1. Automated usability evaluation
  2. task tree generation
  3. usability smell detection
  4. virtual reality

Qualifiers

  • Research-article
  • Research
  • Refereed


Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 224
  • Downloads (last 6 weeks): 27
Reflects downloads up to 24 Nov 2024

Cited By

  • AI-Driven Real-Virtual Responsive User Interface and Technology Requirements for Immersive AR Cultural Content. Journal of Broadcast Engineering 29, 5 (2024), 606-615. DOI: 10.5909/JBE.2024.29.5.606
  • Methods for Evaluating Immersive 3D Virtual Environments: a Systematic Literature Review. In Proceedings of the 26th Symposium on Virtual and Augmented Reality (2024), 140-151. DOI: 10.1145/3691573.3691595
  • SIM2VR: Towards Automated Biomechanical Testing in VR. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (2024), 1-15. DOI: 10.1145/3654777.3676452
  • Exploring the Impact of Artificial Intelligence-Generated Content (AIGC) Tools on Social Dynamics in UX Collaboration. In Proceedings of the 2024 ACM Designing Interactive Systems Conference (2024), 1594-1606. DOI: 10.1145/3643834.3660703
  • Enhancing UX Evaluation Through Collaboration with Conversational AI Assistants: Effects of Proactive Dialogue and Timing. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (2024), 1-16. DOI: 10.1145/3613904.3642168
  • User eXperience (UX) Evaluation in Virtual Reality (VR). In Information Systems and Technologies (2024), 207-215. DOI: 10.1007/978-3-031-45642-8_20
  • Design of Control Elements in Virtual Reality: Investigation of Factors Influencing Operating Efficiency, User Experience, Presence, and Workload. Applied Sciences 13, 15 (2023), 8668. DOI: 10.3390/app13158668
  • Effects of virtual reality and test environment on user experience, usability, and mental workload in the evaluation of a blood pressure monitor. Frontiers in Virtual Reality 4 (2023). DOI: 10.3389/frvir.2023.1151190
  • Test automation for augmented reality applications: a development process model and case study. i-com 22, 3 (2023), 175-192. DOI: 10.1515/icom-2023-0029
  • DUX: a dataset of user interactions and user emotions. i-com 22, 2 (2023), 101-123. DOI: 10.1515/icom-2023-0014
