Research Article
DOI: 10.1145/1518701.1519023

Why and why not explanations improve the intelligibility of context-aware intelligent systems

Published: 04 April 2009

Abstract

Context-aware intelligent systems employ implicit inputs and make decisions based on complex rules and machine learning models that are rarely clear to users. Such lack of system intelligibility can lead to loss of user trust, satisfaction and acceptance of these systems. However, automatically providing explanations about a system's decision process can help mitigate this problem. In this paper we present results from a controlled study with over 200 participants in which the effectiveness of different types of explanations was examined. Participants were shown examples of a system's operation along with various automatically generated explanations, and then tested on their understanding of the system. We show, for example, that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust. Explanations describing why the system did not behave a certain way resulted in lower understanding yet adequate performance. We discuss implications for the use of our findings in real-world context-aware applications.
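The abstract refers to automatically generated Why and Why Not explanations of a rule-driven decision process. As a minimal illustrative sketch only, and not the paper's implementation, the Python snippet below shows one mechanical way such explanations can be derived from a hypothetical condition-action rule set: a Why answer recites the satisfied conditions of the rule that fired, while a Why Not answer recites the unmet conditions of any rule that would have produced the queried behavior. All rules, sensor names, and outcomes here are invented for illustration.

```python
# A minimal, hypothetical sketch (not the authors' system): deriving
# "Why" and "Why not" explanations from a condition-action rule set.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple


@dataclass
class Condition:
    description: str                      # human-readable reason fragment
    test: Callable[[Dict], bool]          # predicate over the sensed context


@dataclass
class Rule:
    outcome: str                          # action chosen when all conditions hold
    conditions: List[Condition]


# Invented example rules for a phone deciding how to ring.
RULES: List[Rule] = [
    Rule("silence the ringer", [
        Condition("a meeting is on the calendar", lambda c: c["in_meeting"]),
        Condition("more than one person is in the room", lambda c: c["occupancy"] > 1),
    ]),
    Rule("ring normally", [
        Condition("no meeting is scheduled", lambda c: not c["in_meeting"]),
    ]),
]


def decide(context: Dict) -> Tuple[str, Optional[Rule]]:
    """Fire the first rule whose conditions all hold."""
    for rule in RULES:
        if all(cond.test(context) for cond in rule.conditions):
            return rule.outcome, rule
    return "do nothing", None


def explain_why(context: Dict) -> str:
    """Why: recite the satisfied conditions of the rule that fired."""
    outcome, rule = decide(context)
    if rule is None:
        return "No rule applied, so the system did nothing."
    reasons = "; ".join(c.description for c in rule.conditions)
    return f"The system decided to {outcome} because {reasons}."


def explain_why_not(context: Dict, queried: str) -> str:
    """Why not: recite the unmet conditions of rules for the queried outcome."""
    for rule in RULES:
        if rule.outcome == queried:
            failed = [c.description for c in rule.conditions if not c.test(context)]
            if failed:
                return (f"The system did not {queried} because it is not the case "
                        "that: " + "; ".join(failed))
    return f"Nothing in the rules prevented '{queried}'."


if __name__ == "__main__":
    ctx = {"in_meeting": True, "occupancy": 3}
    print(explain_why(ctx))                      # why it silenced the ringer
    print(explain_why_not(ctx, "ring normally"))
```

In this style, Why explanations come directly from the fired rule's trace, so they stay faithful to the actual decision path; Why Not explanations instead require searching the rule set for the counterfactual outcome, which is one reason they can be harder for users to follow.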




Published In

CHI '09: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
April 2009, 2426 pages
ISBN: 9781605582467
DOI: 10.1145/1518701
Publisher: Association for Computing Machinery, New York, NY, United States


Author Tags

1. context-aware
2. explanations
3. intelligibility


Conference

CHI '09

Acceptance Rates

CHI '09 Paper Acceptance Rate: 277 of 1,130 submissions, 25%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%

