
DOI: 10.1145/3173574.3173618

Rethinking Thinking Aloud: A Comparison of Three Think-Aloud Protocols

Published: 19 April 2018

Abstract

This paper presents the results of a study comparing three think-aloud methods: concurrent think-aloud, retrospective think-aloud, and a hybrid method. The methods were compared through an evaluation of a library website on four measures: task performance, participants' experiences, usability problems discovered, and the cost of employing each method. The results revealed that the concurrent method outperformed both the retrospective and hybrid methods in facilitating successful usability testing. It detected more usability problems than the retrospective method and produced output comparable to that of the hybrid method. It received average-to-positive ratings from participants, and no reactivity was observed. Finally, the concurrent method required far less evaluator time than the other two methods, both of which doubled testing and analysis time.
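The abstract's four points of comparison lend themselves to a simple aggregation over per-session records. The following Python sketch is illustrative only: the SessionRecord structure, its field names, and all numbers are assumptions for the sake of the example, not data or code from the paper. It tallies the unique usability problems and the evaluator time (testing plus analysis) attributable to each protocol, the two output-and-cost measures the abstract emphasises.

from dataclasses import dataclass

@dataclass
class SessionRecord:
    """One usability-test session (hypothetical structure, not from the paper)."""
    protocol: str            # "concurrent", "retrospective", or "hybrid"
    problems: set[str]       # IDs of usability problems observed in the session
    testing_minutes: float   # time spent running the session
    analysis_minutes: float  # evaluator time spent analysing the recording

def summarize(sessions: list[SessionRecord]) -> dict[str, dict[str, float]]:
    """Aggregate unique problem counts and evaluator time per protocol."""
    unique: dict[str, set[str]] = {}
    stats: dict[str, dict[str, float]] = {}
    for s in sessions:
        unique.setdefault(s.protocol, set()).update(s.problems)
        cost = stats.setdefault(s.protocol, {"testing_min": 0.0, "analysis_min": 0.0})
        cost["testing_min"] += s.testing_minutes
        cost["analysis_min"] += s.analysis_minutes
    for protocol, probs in unique.items():
        stats[protocol]["unique_problems"] = float(len(probs))
    return stats

# Invented numbers that only mirror the abstract's direction: retrospective and
# hybrid sessions roughly double testing and analysis time per participant.
sessions = [
    SessionRecord("concurrent", {"P1", "P2", "P3"}, 30, 45),
    SessionRecord("retrospective", {"P1", "P3"}, 60, 90),
    SessionRecord("hybrid", {"P1", "P2", "P3"}, 60, 90),
]
print(summarize(sessions))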

    Information & Contributors

    Information

    Published In

    cover image ACM Conferences
    CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
    April 2018
    8489 pages
    ISBN:9781450356206
    DOI:10.1145/3173574

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. human-computer interaction
    2. think-aloud protocols
    3. usability testing
    4. user experiences
    5. user studies

    Qualifiers

    • Research-article

Conference

CHI '18

Acceptance Rates

CHI '18 Paper Acceptance Rate: 666 of 2,590 submissions, 26%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%
