xAI 2023: Lisbon, Portugal
- Luca Longo (ed.): Explainable Artificial Intelligence - First World Conference, xAI 2023, Lisbon, Portugal, July 26-28, 2023, Proceedings, Part I. Communications in Computer and Information Science 1901, Springer 2023, ISBN 978-3-031-44063-2
Interdisciplinary Perspectives, Approaches and Strategies for xAI
- Deborah Baum, Kevin Baum, Timo P. Gros, Verena Wolf: XAI Requirements in Smart Production Processes: A Case Study. 3-24
- Francesco Sovrano, Fabio Vitali: Perlocution vs Illocution: How Different Interpretations of the Act of Explaining Impact on the Evaluation of Explanations and XAI. 25-47
- Timo Freiesleben, Gunnar König: Dear XAI Community, We Need to Talk! - Fundamental Misconceptions in Current XAI Research. 48-65
- Jakob Mannmeusel, Mario Rothfelder, Samaneh Khoshrou: Speeding Things Up. Can Explainability Improve Human Learning? 66-84
- Labhaoise NíFhaoláin, Andrew Hines, Vivek Nallur: Statutory Professions in AI Governance and Their Consequences for Explainable AI. 85-96
- Valentina Ghidini: The Xi Method: Unlocking the Mysteries of Regression with Statistics. 97-114
- Minal Suresh Patil, Kary Främling: Do Intermediate Feature Coalitions Aid Explainability of Black-Box Models? 115-130
- Kristin Blesch, Marvin N. Wright, David S. Watson: Unfooling SHAP and SAGE: Knockoff Imputation for Shapley Values. 131-146
- Andrea Apicella, Luca Di Lorenzo, Francesco Isgrò, Andrea Pollastro, Roberto Prevete: Strategies to Exploit XAI to Improve Classification Systems. 147-159
- Ettore Mariotti, Adarsa Sivaprasad, Jose Maria Alonso-Moral: Beyond Prediction Similarity: ShapGAP for Evaluating Faithful Surrogate Models in XAI. 160-173
Model-Agnostic Explanations, Methods and Techniques for xAI, Causality and Explainable AI
- Maximilian Muschalik, Fabian Fumagalli, Rohit Jagtani, Barbara Hammer, Eyke Hüllermeier: iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios. 177-194
- Fatima Ezzeddine, Omran Ayoub, Davide Andreoletti, Silvia Giordano: SAC-FACT: Soft Actor-Critic Reinforcement Learning for Counterfactual Explanations. 195-216
- Christian A. Scholbeck, Henri Funk, Giuseppe Casalicchio: Algorithm-Agnostic Feature Attributions for Clustering. 217-240
- Kary Främling: Feature Importance versus Feature Influence and What It Signifies for Explainable AI. 241-259
- Dimitry Mindlin, Malte Schilling, Philipp Cimiano: ABC-GAN: Spatially Constrained Counterfactual Generation for Image Classification Explanations. 260-282
- Isacco Beretta, Martina Cinquini: The Importance of Time in Causal Algorithmic Recourse. 283-298
- Marcel Robeer, Floris Bex, Ad Feelders, Henry Prakken: Explaining Model Behavior with Global Causal Analysis. 299-323
- Carlo Abrate, Giulia Preti, Francesco Bonchi: Counterfactual Explanations for Graph Classification Through the Lenses of Density. 324-348
- Justus Sagemüller, Olivier Verdier: Ablation Path Saliency. 349-372
- Pedro Sequeira, Melinda T. Gervasio: IxDRL: A Novel Explainable Deep Reinforcement Learning Toolkit Based on Analyses of Interestingness. 373-396
- Meike Nauta, Christin Seifert: The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers. 397-420
- Laura State, Salvatore Ruggieri, Franco Turini: Reason to Explain: Interactive Contrastive Explanations (REASONX). 421-437
- Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro: Sanity Checks for Saliency Methods Explaining Object Detectors. 438-455
- Christoph Molnar, Timo Freiesleben, Gunnar König, Julia Herbinger, Tim Reisinger, Giuseppe Casalicchio, Marvin N. Wright, Bernd Bischl: Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process. 456-479
Explainable AI in Finance, Cybersecurity, Health-Care and Biomedicine
- Julian Tritscher, Maximilian Wolf, Andreas Hotho, Daniel Schlör: Evaluating Feature Relevance XAI in Network Intrusion Detection. 483-497
- Jean Dessain, Nora Bentaleb, Fabien Vinas: Cost of Explainability in AI: An Example with Credit Scoring Models. 498-516
- Paolo Giudici, Emanuela Raffinetti: Lorenz Zonoids for Trustworthy AI. 517-530
- Maria Carla Calzarossa, Paolo Giudici, Rasha Zieni: Explainable Machine Learning for Bag of Words-Based Phishing Detection. 531-543
- Avleen Malhi, Kary Främling: An Evaluation of Contextual Importance and Utility for Outcome Explanation of Black-Box Predictions for Medical Datasets. 544-557
- Lisa Anita De Santi, Filippo Bargagna, Maria Filomena Santarelli, Vincenzo Positano: Evaluating Explanations of an Alzheimer's Disease 18F-FDG Brain PET Black-Box Classifier. 558-581
- Sarah Holm, Luís Macedo: The Accuracy and Faithfullness of AL-DLIME - Active Learning-Based Deterministic Local Interpretable Model-Agnostic Explanations: A Comparison with LIME and DLIME in Medicine. 582-605
- Avleen Malhi, Vlad Apopei, Kary Främling: Understanding Unsupervised Learning Explanations Using Contextual Importance and Utility. 606-617
- Chiara Natali, Lorenzo Famiglini, Andrea Campagner, Giovanni Andrea La Maida, Enrico Gallazzi, Federico Cabitza: Color Shadows 2: Assessing the Impact of XAI on Diagnostic Decision-Making. 618-629
- José Luis Corcuera Bárcena, Pietro Ducange, Francesco Marcelloni, Alessandro Renda, Fabrizio Ruffini: Federated Learning of Explainable Artificial Intelligence Models for Predicting Parkinson's Disease Progression. 630-648
- Jingyu Hu, Yizhu Liang, Weiyu Zhao, Kevin McAreavey, Weiru Liu: An Interactive XAI Interface with Application in Healthcare for Non-experts. 649-670
- Oleksandr Davydko, Vladimir Pavlov, Luca Longo: Selecting Textural Characteristics of Chest X-Rays for Pneumonia Lesions Classification with the Integrated Gradients XAI Attribution Method. 671-687