- survey, January 2025
Can Graph Neural Networks be Adequately Explained? A Survey
ACM Computing Surveys (CSUR), Volume 57, Issue 5, Article No.: 131, Pages 1–36
https://doi.org/10.1145/3711122
To address the barrier caused by the black-box nature of Deep Learning (DL) for practical deployment, eXplainable Artificial Intelligence (XAI) has emerged and is developing rapidly. While significant progress has been made in explanation techniques for ...
- research-article, January 2025
Explaining Neural News Recommendation with Attributions onto Reading Histories
ACM Transactions on Intelligent Systems and Technology (TIST), Volume 16, Issue 1, Article No.: 7, Pages 1–25
https://doi.org/10.1145/3673233
An important aspect of responsible recommendation systems is the transparency of the prediction mechanisms. This is a general challenge for deep-learning-based systems such as the currently predominant neural news recommender architectures, which are ...
- research-article, November 2024
Interpretation with baseline shapley value for feature groups on tree models
Frontiers of Computer Science: Selected Publications from Chinese Universities (FCS), Volume 19, Issue 5
https://doi.org/10.1007/s11704-024-40117-2
Tree models have made impressive progress during the past years, while an important problem is to understand how these models predict, in particular for critical applications such as finance and medicine. For this issue, most previous works ...
- tutorial, November 2024
A Practical Tutorial on Explainable AI Techniques
- Adrien Bennetot,
- Ivan Donadello,
- Ayoub El Qadi El Haouari,
- Mauro Dragoni,
- Thomas Frossard,
- Benedikt Wagner,
- Anna Sarranti,
- Silvia Tulli,
- Maria Trocan,
- Raja Chatila,
- Andreas Holzinger,
- Artur Davila Garcez,
- Natalia Díaz-Rodríguez
ACM Computing Surveys (CSUR), Volume 57, Issue 2, Article No.: 50, Pages 1–44
https://doi.org/10.1145/3670685
The past years have been characterized by an upsurge in opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although DNNs have great generalization and prediction abilities, it is difficult to obtain detailed explanations for ...
- abstract, October 2024
MIRACLE: An Online, Explainable Multimodal Interactive Concept Learning System
- Ansel Blume,
- Khanh Duy Nguyen,
- Zhenhailong Wang,
- Yangyi Chen,
- Michal Shlapentokh-Rothman,
- Xiaomeng Jin,
- Jeonghwan Kim,
- Zhen Zhu,
- Jiateng Liu,
- Kuan-Hao Huang,
- Mankeerat Sidhu,
- Xuanming Zhang,
- Vivian Liu,
- Raunak Sinha,
- Te-Lin Wu,
- Abhay Zala,
- Elias Stengel-Eskin,
- Da Yin,
- Yao Xiao,
- Utkarsh Mall,
- Zhou Yu,
- Kai-Wei Chang,
- Camille Cobb,
- Karrie Karahalios,
- Lydia Chilton,
- Mohit Bansal,
- Nanyun Peng,
- Carl Vondrick,
- Derek Hoiem,
- Heng Ji
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, Pages 11252–11254
https://doi.org/10.1145/3664647.3684993
We present MIRACLE, a system for online, interpretable visual concept and video action recognition. Through a chat interface, users query the recognition system with an uploaded image or video. For images, MIRACLE returns concept predictions from its ...
- research-article, October 2024
Peeling Back the Layers: Interpreting the Storytelling of ViT
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, Pages 7298–7306
https://doi.org/10.1145/3664647.3681712
By integrating various modules with the Visual Transformer (ViT), we facilitate an interpretation of image processing across each layer and attention head. This method allows us to explore the connections both within and across the layers, enabling a ...
- research-article, October 2024
Siformer: Feature-isolated Transformer for Efficient Skeleton-based Sign Language Recognition
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, Pages 9387–9396
https://doi.org/10.1145/3664647.3681578
Sign language recognition (SLR) refers to interpreting sign language glosses from given videos automatically. This research area presents a complex challenge in computer vision because of the rapid and intricate movements inherent in sign languages, ...
- research-article, October 2024
Enhancing Model Interpretability with Local Attribution over Global Exploration
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, Pages 5347–5355
https://doi.org/10.1145/3664647.3681385
In the field of artificial intelligence, AI models are frequently described as 'black boxes' due to the obscurity of their internal mechanisms. This has ignited research interest in model interpretability, especially in attribution methods that offer ...
- research-article, October 2024
Prompt-Guided Image-Adaptive Neural Implicit Lookup Tables for Interpretable Image Enhancement
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, Pages 6463–6471
https://doi.org/10.1145/3664647.3680743
In this paper, we delve into the concept of interpretable image enhancement, a technique that enhances image quality by adjusting filter parameters with easily understandable names such as "Exposure" and "Contrast". Unlike using predefined image ...
- research-article, October 2024
Redundancy and Concept Analysis for Code-trained Language Models
ASEW '24: Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering Workshops, Pages 24–34
https://doi.org/10.1145/3691621.3694931
Code-trained language models have proven to be highly effective for various code intelligence tasks. However, they can be challenging to train and deploy for many software engineering applications due to computational bottlenecks and memory constraints. ...
- research-article, October 2024
SLIM: a Scalable and Interpretable Light-weight Fault Localization Algorithm for Imbalanced Data in Microservice
ASE '24: Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, Pages 27–39
https://doi.org/10.1145/3691620.3694984
In real-world microservice systems, a newly deployed service, one kind of change service, can lead to a new type of minority fault. Existing state-of-the-art (SOTA) methods for fault localization rarely consider the imbalanced fault classification ...
- research-article, October 2024
"Reasoning before Responding": Towards Legal Long-form Question Answering with Interpretability
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 4922–4930
https://doi.org/10.1145/3627673.3680082
Long-Form Question Answering (LFQA) represents a growing interest in Legal Natural Language Processing (Legal-NLP), as many individuals encounter legal disputes at some point in their lives but lack knowledge about how to negotiate these complex ...
- short-paper, October 2024
SurvReLU: Inherently Interpretable Survival Analysis via Deep ReLU Networks
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 4081–4085
https://doi.org/10.1145/3627673.3679947
Survival analysis models time-to-event distributions with censorship. Recently, deep survival models using neural networks have dominated due to their representational power and state-of-the-art performance. However, their "black-box" nature hinders ...
- research-article, October 2024
Semantic Prototypes: Enhancing Transparency without Black Boxes
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 1680–1688
https://doi.org/10.1145/3627673.3679795
As machine learning (ML) models and datasets increase in complexity, the demand for methods that enhance explainability and interpretability becomes paramount. Prototypes, by encapsulating essential characteristics within data, offer insights that enable ...
- research-article, October 2024
Out-of-Distribution Aware Classification for Tabular Data
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 65–75
https://doi.org/10.1145/3627673.3679755
Out-of-distribution (OOD) aware classification aims to classify in-distribution samples into their respective classes while simultaneously detecting OOD samples. Previous works have largely focused on the image domain, where images from an unrelated ...
- research-article, October 2024
Leveraging Local Structure for Improving Model Explanations: An Information Propagation Approach
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 2890–2899
https://doi.org/10.1145/3627673.3679575
Numerous explanation methods have been recently developed to interpret the decisions made by deep neural network (DNN) models. For image classifiers, these methods typically provide an attribution score to each pixel in the image to quantify its ...
- research-article, October 2024
Causal Probing for Dual Encoders
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 2292–2303
https://doi.org/10.1145/3627673.3679556
Dual encoders are highly effective and widely deployed in the retrieval phase for passage and document ranking, question answering, or retrieval-augmented generation (RAG) setups. Most dual-encoder models use transformer models like BERT to map input ...
- short-paper, October 2024
Knowledge Graphs for Responsible AI
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 5596–5598
https://doi.org/10.1145/3627673.3679085
Responsible AI is built upon a set of principles that prioritize fairness, transparency, accountability, and inclusivity in AI development and deployment. As AI systems become increasingly sophisticated, including the explosion of generative AI, there is ...
- research-article, October 2024
VIME: Visual Interactive Model Explorer for Identifying Capabilities and Limitations of Machine Learning Models for Sequential Decision-Making
UIST '24: Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, Article No.: 59, Pages 1–21
https://doi.org/10.1145/3654777.3676323
Ensuring that Machine Learning (ML) models make correct and meaningful inferences is necessary for the broader adoption of such models into high-stakes decision-making scenarios. Thus, ML model engineers increasingly use eXplainable AI (XAI) tools to ...
- Article, October 2024
ThyGraph: A Graph-Based Approach for Thyroid Nodule Diagnosis from Ultrasound Studies
- Ashwath Radhachandran,
- Alekhya Vittalam,
- Vedrana Ivezic,
- Vivek Sant,
- Shreeram Athreya,
- Chace Moleta,
- Maitraya Patel,
- Rinat Masamed,
- Corey Arnold,
- William Speier
Medical Image Computing and Computer Assisted Intervention – MICCAI 2024, Pages 753–763
https://doi.org/10.1007/978-3-031-72083-3_70
Improved thyroid nodule risk stratification from ultrasound (US) can mitigate overdiagnosis and unnecessary biopsies. Previous studies often train deep learning models using manually selected single US frames; these approaches deviate from ...