DOI: 10.1145/3462204.3481756

Exploring Users’ Preferences for Chatbot’s Guidance Type and Timing

Published: 23 October 2021

Abstract

While task-oriented chatbots have become popular recently, conversational breakdowns remain common and often lead to unfavorable user experiences. Guidance plays a crucial role in helping users understand how to interact with chatbots more effectively. Nonetheless, questions such as what kinds of guidance to provide and when to provide it remain underexplored. In this study, we examined users’ preferences for two types of guidance (Example-Based and Rule-Based) at four guidance timings (Service-Onboarding, Task-Intro, After-Failure, and Upon-Request). Our results show that users preferred Example-Based guidance and generally preferred guidance provided at Task-Intro; Example-Based guidance appearing at Task-Intro was the favorite combination for most participants. Through analysis of participants’ explanations of their preferences, we present the strengths and weaknesses of these guidance types and timings. These preliminary results are based on a subset of the data (n=24); further in-depth investigation is needed into the underlying factors that influence users’ preferences for guidance, as well as the interplay between guidance type and guidance timing.


Cited By

  • Designing the Conversational Agent: Asking Follow-up Questions for Information Elicitation. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (2024), 1–30. DOI: 10.1145/3637320. Online publication date: 26 April 2024.
  • How much is a “feedback” worth? User engagement and interaction for computer-supported adaptive quizzing. Interactive Learning Environments (2023), 1–16. DOI: 10.1080/10494820.2023.2176521. Online publication date: 1 March 2023.

Published In

CSCW '21 Companion: Companion Publication of the 2021 Conference on Computer Supported Cooperative Work and Social Computing
October 2021
370 pages
ISBN:9781450384797
DOI:10.1145/3462204
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Ministry of Science and Technology, R.O.C

Conference

CSCW '21

Acceptance Rates

Overall Acceptance Rate 2,235 of 8,521 submissions, 26%

