DOI: 10.1007/978-3-030-49760-6_4
Article

What Are People Doing About XAI User Experience? A Survey on AI Explainability Research and Practice

Published: 19 July 2020

Abstract

Explainability is a hot topic nowadays for artificial intelligence (AI) systems. The role of machine learning (ML) models in influencing human decisions has shed light on the black box of computing systems. AI-based systems are more than just ML models: an ML model is one element of AI explainability design and needs to be combined with other elements so that the explanation has significant meaning for the people using the AI system. There are different goals and motivations for AI explainability, but regardless of the goal, there is more to an AI explanation than ML models or algorithms. Explaining an AI system's behavior needs to consider three dimensions: 1) who receives the explanation, 2) why the explanation is needed, and 3) in which context, and with which situated information, the explanation is presented. Considering those three dimensions, an explanation can be effective by fitting the user's needs and expectations at the right moment and in the right format. The design of the AI explanation user experience is central to the pressing need of people and society to understand how an AI system may impact human decisions. In this paper, we present a literature review of AI explainability research and practice. We first looked at computer science (CS) research to identify the main research themes in AI explainability, or "explainable AI". Then we focused on Human-Computer Interaction (HCI) research, trying to answer three questions about the selected publications: for whom the AI explainability is intended (who), what the purpose of the AI explanation is (why), and in which context the AI explanation is presented (what + when).



Published In

Design, User Experience, and Usability. Design for Contemporary Interactive Environments: 9th International Conference, DUXU 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings, Part II
Jul 2020
743 pages
ISBN:978-3-030-49759-0
DOI:10.1007/978-3-030-49760-6
  • Editors: Aaron Marcus, Elizabeth Rosenzweig

Publisher

Springer-Verlag

Berlin, Heidelberg


Author Tags

  1. AI explainability
  2. Explainable AI
  3. Artificial Intelligence design
  4. AI for UX

Qualifiers

  • Article


Cited By

  • (2024) The X Factor: On the Relationship between User eXperience and eXplainability. Proceedings of the 13th Nordic Conference on Human-Computer Interaction, pp. 1–12. DOI: 10.1145/3679318.3685352. Online publication date: 13-Oct-2024
  • (2024) Generating Context-Aware Contrastive Explanations in Rule-based Systems. Proceedings of the 2024 Workshop on Explainability Engineering, pp. 8–14. DOI: 10.1145/3648505.3648507. Online publication date: 20-Apr-2024
  • (2024) A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts. Data Mining and Knowledge Discovery 38(5), pp. 3043–3101. DOI: 10.1007/s10618-022-00867-8. Online publication date: 1-Sep-2024
  • (2024) Towards a Framework for Interdisciplinary Studies in Explainable Artificial Intelligence. Artificial Intelligence in HCI, pp. 316–333. DOI: 10.1007/978-3-031-60606-9_18. Online publication date: 29-Jun-2024
  • (2024) What Is the Focus of XAI in UI Design? Prioritizing UI Design Principles for Enhancing XAI User Experience. Artificial Intelligence in HCI, pp. 219–237. DOI: 10.1007/978-3-031-60606-9_13. Online publication date: 29-Jun-2024
  • (2024) Exploring the Impact of Explainability on Trust and Acceptance of Conversational Agents – A Wizard of Oz Study. Artificial Intelligence in HCI, pp. 199–218. DOI: 10.1007/978-3-031-60606-9_12. Online publication date: 29-Jun-2024
  • (2023) FATE in AI: Towards Algorithmic Inclusivity and Accessibility. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, pp. 1–14. DOI: 10.1145/3617694.3623233. Online publication date: 30-Oct-2023
  • (2023) The Role of Explainable AI in the Research Field of AI Ethics. ACM Transactions on Interactive Intelligent Systems 13(4), pp. 1–39. DOI: 10.1145/3599974. Online publication date: 8-Dec-2023
  • (2023) Questioning the ability of feature-based explanations to empower non-experts in robo-advised financial decision-making. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, pp. 943–958. DOI: 10.1145/3593013.3594053. Online publication date: 12-Jun-2023
  • (2023) Explainable Convolutional Neural Networks: A Taxonomy, Review, and Future Directions. ACM Computing Surveys 55(10), pp. 1–37. DOI: 10.1145/3563691. Online publication date: 2-Feb-2023
