- research-article, January 2025
Enhancing Intrusion Detection Systems With Advanced Machine Learning Techniques: An Ensemble and Explainable Artificial Intelligence (AI) Approach
Abstract: The increasing sophistication of cyber threats necessitates advancements in intrusion detection systems (IDS). This research introduces a novel IDS framework that integrates advanced machine learning (ML) techniques, including ensemble learning, ...
- research-article, December 2024
Exploring Dataset Bias and Scaling Techniques in Multi-Source Gait Biomechanics: An Explainable Machine Learning Approach
- research-article, December 2024
DOMAIN: Explainable Credibility Assessment Tools for Empowering Online Readers Coping with Misinformation
ACM Transactions on the Web (TWEB), Volume 19, Issue 1, Article No.: 3, Pages 1–31, https://doi.org/10.1145/3696472
Fact-checking initiatives on news and social media aim to counter misinformation, yet they remain insufficient to promptly address the wide array of misleading information disseminated by both news and social media outlets. Rather than ...
- introduction, November 2024
Multimodal Co-Construction of Explanations with XAI Workshop
ICMI '24: Proceedings of the 26th International Conference on Multimodal Interaction, Pages 698–699, https://doi.org/10.1145/3678957.3689205
The ICMI 2024 workshop on “Multimodal Co-Construction of Explanations with XAI” bridges the fields of Explainable Artificial Intelligence (XAI) and Multimodal Interaction, focusing on the recent perspective that effective AI explanations should be ...
- short-paper, October 2024
EDGE: Evaluation Framework for Logical vs. Subgraph Explanations for Node Classifiers on Knowledge Graphs
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 4026–4030, https://doi.org/10.1145/3627673.3679904
As machine learning and deep learning become increasingly integrated into our daily lives, understanding how these technologies make decisions is crucial. To ensure transparency, accountability, and ethical adherence, these so-called "black-box" models ...
- research-article, October 2024
Correcting Biases of Shapley Value Attributions for Informative Machine Learning Model Explanations
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 3331–3340, https://doi.org/10.1145/3627673.3679846
Shapley value attribution (SVA) is an increasingly popular Explainable AI (XAI) approach that has been widely used in many recent applied studies to gain new insights into the underlying information systems. However, most existing SVA methods are error-...
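To make this entry's subject concrete, here is a minimal permutation-sampling sketch of Shapley value attribution; the function names, the replace-with-baseline scheme for "absent" features, and the toy model are illustrative assumptions, not this paper's method.

```python
import numpy as np

def shapley_attributions(model, x, baseline, n_samples=200, rng=None):
    """Monte Carlo Shapley value attribution for one input.

    Averages each feature's marginal contribution over random feature
    orderings; absent features take values from `baseline` (an
    illustrative masking choice, not the paper's).
    """
    rng = rng or np.random.default_rng(0)
    phi = np.zeros(x.shape[0])
    for _ in range(n_samples):
        z = baseline.copy()
        prev = model(z)
        for j in rng.permutation(x.shape[0]):
            z[j] = x[j]               # reveal feature j
            cur = model(z)
            phi[j] += cur - prev      # marginal contribution of j
            prev = cur
    return phi / n_samples

# Attributions satisfy efficiency: they sum to f(x) - f(baseline).
f = lambda v: 3.0 * v[0] + v[1] * v[2]
x, b = np.array([1.0, 2.0, 0.5]), np.zeros(3)
phi = shapley_attributions(f, x, b)
print(phi, phi.sum(), f(x) - f(b))   # interaction split between v1 and v2
```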
- research-article, October 2024
Probabilistic Path Integration with Mixture of Baseline Distributions
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 570–580, https://doi.org/10.1145/3627673.3679641
Path integration methods generate attributions by integrating along a trajectory from a baseline to the input. These techniques have demonstrated considerable effectiveness in the field of explainability research. While multiple types of baselines for ...
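The baseline-to-input integration this abstract describes is the core of integrated-gradients-style attribution; below is a minimal sketch assuming a scalar model and finite-difference gradients. A single fixed baseline stands in for the paper's probabilistic mixture of baselines, which is not reproduced here.

```python
import numpy as np

def num_grad(f, x, eps=1e-5):
    """Central-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def integrated_gradients(f, x, baseline, steps=50):
    """Attribute f(x) - f(baseline) by accumulating gradients along the
    straight path from `baseline` to `x` (midpoint Riemann sum)."""
    total = np.zeros_like(x)
    for k in range(steps):
        a = (k + 0.5) / steps
        total += num_grad(f, baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Completeness check: attributions sum to f(x) - f(baseline).
f = lambda v: v[0] ** 2 + 2.0 * v[1]
x, b = np.array([1.0, 3.0]), np.zeros(2)
attr = integrated_gradients(f, x, b)
print(attr, attr.sum(), f(x) - f(b))
```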
- research-article, October 2024
HiLite: Hierarchical Level-implemented Architecture Attaining Part-Whole Interpretability
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 983–993, https://doi.org/10.1145/3627673.3679538
Beyond the traditional CNN structure, we have recently witnessed many breakthroughs in computer vision architectures such as Vision Transformer, MLP-Mixer, SNN-MLP, and so on. However, many efforts in developing novel architectures for vision tasks ...
- short-paper, October 2024
XplainScreen: Unveiling the Black Box of Graph Neural Network Drug Screening Models with a Unified XAI Framework
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 5164–5168, https://doi.org/10.1145/3627673.3679236
Despite the powerful capabilities of GNN-based drug screening models in predicting target drug properties, the black-box nature of these models poses a challenge for practical application, particularly in a field as critical as drug development, where ...
- research-article, October 2024
DIRECT: Dual Interpretable Recommendation with Multi-aspect Word Attribution
ACM Transactions on Intelligent Systems and Technology (TIST), Volume 15, Issue 5, Article No.: 97, Pages 1–21, https://doi.org/10.1145/3663483
Recommending products to users with intuitive explanations helps improve system transparency, persuasiveness, and satisfaction. Existing interpretation techniques include post hoc methods and interpretable modeling. The former category could ...
- extended-abstract, October 2024
2nd Workshop on Cars As Social Agents (CarSA): Interdisciplinary Perspectives on Human–Vehicle Interaction
NordiCHI '24 Adjunct: Adjunct Proceedings of the 2024 Nordic Conference on Human-Computer Interaction, Article No.: 45, Pages 1–3, https://doi.org/10.1145/3677045.3685460
As autonomous vehicles (AVs) emerge as new agents in traffic, it becomes imperative to understand how they can be introduced effectively and acceptably. AVs enter existing socio-technical systems, where members of various social communities are already ...
- Article, October 2024
Deep Learning for Cancer Prognosis Prediction Using Portrait Photos by StyleGAN Embedding
- Amr Hagag,
- Ahmed Gomaa,
- Dominik Kornek,
- Andreas Maier,
- Rainer Fietkau,
- Christoph Bert,
- Yixing Huang,
- Florian Putz
Medical Image Computing and Computer Assisted Intervention – MICCAI 2024, Pages 198–208, https://doi.org/10.1007/978-3-031-72086-4_19
Abstract: Survival prediction for cancer patients is critical for optimal treatment selection and patient management. Current patient survival prediction methods typically extract survival information from patients’ clinical record data or biological and ...
- Article, October 2024
DEPICT: Diffusion-Enabled Permutation Importance for Image Classification Tasks
Abstract: We propose a permutation-based explanation method for image classifiers. Current image-model explanations like activation maps are limited to instance-based explanations in the pixel space, making it difficult to understand global model behavior. ...
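As background for this entry, the classic tabular form of permutation importance is sketched below; the diffusion-enabled image variant the paper proposes is not reproduced, and all names here are illustrative assumptions.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, rng=None):
    """Classic permutation importance: how much a metric degrades when
    one feature column is shuffled, severing its link to the labels."""
    rng = rng or np.random.default_rng(0)
    base = metric(y, model(X))
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])     # permute feature j only
            drops.append(base - metric(y, model(Xp)))
        imp[j] = np.mean(drops)       # mean drop in the metric
    return imp

# Toy usage: only feature 0 carries signal, so it dominates.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = (X[:, 0] > 0).astype(int)
model = lambda A: (A[:, 0] > 0).astype(int)   # stand-in "fitted" model
acc = lambda t, p: np.mean(t == p)
print(permutation_importance(model, X, y, acc))
```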
- research-article, September 2024
Designing Digital Voting Systems for Citizens: Achieving Fairness and Legitimacy in Participatory Budgeting
- Joshua C. Yang,
- Carina I. Hausladen,
- Dominik Peters,
- Evangelos Pournaras,
- Regula Hänggli Fricker,
- Dirk Helbing
Digital Government: Research and Practice (DGOV), Volume 5, Issue 3, Article No.: 26, Pages 1–30, https://doi.org/10.1145/3665332
Participatory Budgeting (PB) has evolved into a key democratic instrument for resource allocation in cities. Enabled by digital platforms, cities now have the opportunity to let citizens directly propose and vote on urban projects, using different voting ...
- Article, September 2024
Uncovering Patterns for Local Explanations in Outcome-Based Predictive Process Monitoring
- Andrei Buliga,
- Mozhgan Vazifehdoostirani,
- Laura Genga,
- Xixi Lu,
- Remco Dijkman,
- Chiara Di Francescomarino,
- Chiara Ghidini,
- Hajo A. Reijers
Abstract: Explainable Predictive Process Monitoring aims at deriving explanations of the inner workings of black-box classifiers used to predict the continuation of ongoing process executions. Most existing techniques use data attributes (e.g., the loan ...
- research-article, September 2024
You Can Only Verify When You Know the Answer: Feature-Based Explanations Reduce Overreliance on AI for Easy Decisions, but Not for Hard Ones
MuC '24: Proceedings of Mensch und Computer 2024, Pages 156–170, https://doi.org/10.1145/3670653.3670660
Explaining the mechanisms behind model predictions is a common strategy in AI-assisted decision-making to help users rely appropriately on AI. However, recent research shows that the effectiveness of explanations depends on numerous factors, leading to ...
- research-article, September 2024
Finding Regions of Counterfactual Explanations via Robust Optimization
INFORMS Journal on Computing (INFORMS-IJOC), Volume 36, Issue 5, Pages 1316–1334, https://doi.org/10.1287/ijoc.2023.0153
Counterfactual explanations (CEs) play an important role in detecting bias and improving the explainability of data-driven classification models. A CE is a minimally perturbed data point for which the decision of the model changes. Most of the existing ...
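For orientation, the "minimally perturbed point that flips the decision" definition can be illustrated with a naive random search; this sketch is an assumption-laden stand-in, not the paper's robust-optimization method for finding whole regions of counterfactuals.

```python
import numpy as np

def counterfactual(predict, x, target, step=0.05, iters=2000, rng=None):
    """Naive search for a counterfactual explanation: the nearest point
    found whose predicted class becomes `target` (illustrative only)."""
    rng = rng or np.random.default_rng(0)
    best, best_dist = None, np.inf
    z = x.copy()
    for _ in range(iters):
        z = z + step * rng.standard_normal(x.shape)  # random perturbation
        if predict(z) == target:
            d = np.linalg.norm(z - x)
            if d < best_dist:
                best, best_dist = z.copy(), d
            z = x + 0.5 * (z - x)   # pull back toward x to seek minimality
    return best, best_dist

# Toy linear classifier: the true nearest flip lies at distance 1/||w||.
w = np.array([1.0, 2.0])
predict = lambda v: int(v @ w > 1.0)
x = np.array([0.0, 0.0])                     # currently class 0
cf, dist = counterfactual(predict, x, target=1)
print(cf, dist, 1.0 / np.linalg.norm(w))     # found dist ≈ lower bound
```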
- Article, August 2024
Explainable Knowledge-Based Learning for Online Medical Question Answering
Knowledge Science, Engineering and Management, Pages 294–304, https://doi.org/10.1007/978-981-97-5489-2_26
Abstract: This study introduces an explainable AI framework for online medical Question Answering (QA) tasks, a growing need in Internet hospitals and digital healthcare services. In response to the increasing demand for automated yet accountable medical ...
- research-article, August 2024
Unpacking Human-AI interactions: From Interaction Primitives to a Design Space
ACM Transactions on Interactive Intelligent Systems (TIIS), Volume 14, Issue 3, Article No.: 18, Pages 1–51, https://doi.org/10.1145/3664522
This article aims to develop a semi-formal representation for Human-AI (HAI) interactions by building a set of interaction primitives which can specify the information exchanges between users and AI systems during their interaction. We show how these ...
- research-article, August 2024
Knowledge-graph-based explainable AI: A systematic review
Journal of Information Science (JIPP), Volume 50, Issue 4, Pages 1019–1029, https://doi.org/10.1177/01655515221112844
In recent years, knowledge graphs (KGs) have been widely applied in various domains for different purposes. The semantic model of KGs can represent knowledge through a hierarchical structure based on classes of entities, their properties, and their ...