- Article, September 2024
Uncovering Patterns for Local Explanations in Outcome-Based Predictive Process Monitoring
- Andrei Buliga,
- Mozhgan Vazifehdoostirani,
- Laura Genga,
- Xixi Lu,
- Remco Dijkman,
- Chiara Di Francescomarino,
- Chiara Ghidini,
- Hajo A. Reijers
Abstract: Explainable Predictive Process Monitoring aims at deriving explanations of the inner workings of black-box classifiers used to predict the continuation of ongoing process executions. Most existing techniques use data attributes (e.g., the loan ...
Unraveling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models
CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Article No.: 839, Pages 1–20, https://doi.org/10.1145/3613904.3642934
The field of eXplainable artificial intelligence (XAI) has produced a plethora of methods (e.g., saliency maps) to gain insight into artificial intelligence (AI) models, and has exploded with the rise of deep learning (DL). However, human-participant ...
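For readers unfamiliar with the saliency maps this abstract cites as an example XAI method: a saliency map attributes a model's output to its inputs via the gradient of the output with respect to the input. A minimal sketch with an analytically differentiable logistic model (deep-learning toolkits obtain the same quantity via autodiff); the weights and input below are random placeholders, not anything from the paper:

```python
# Gradient x input saliency for a toy logistic model.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.standard_normal(5), 0.1          # placeholder "trained" weights
x = rng.standard_normal(5)                  # input to explain

p = 1.0 / (1.0 + np.exp(-(w @ x + b)))      # model output (probability)
saliency = p * (1.0 - p) * w                # analytic d p / d x for this model
print(saliency * x)                         # gradient x input attribution
```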
- Research article, March 2024
Globally-consistent rule-based summary-explanations for machine learning models: application to credit-risk evaluation
The Journal of Machine Learning Research (JMLR), Volume 24, Issue 1, Article No.: 16, Pages 533–576
We develop a method for understanding specific predictions made by (global) predictive models by constructing (local) models tailored to each specific observation (these are also called "explanations" in the literature). Unlike existing work that "...
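The core idea this abstract states, a local interpretable model (here, a rule) tailored to one observation and agreeing with the global model's behavior, can be illustrated with a greedy search for a conjunction of threshold conditions the observation satisfies and under which the global model's predictions are uniform. This is a hypothetical sketch only; the paper's method provides global consistency guarantees that this greedy loop does not:

```python
# Greedy rule-based summary-explanation for one observation (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=400, n_features=6, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X, y)
preds = model.predict(X)

def summary_rule(x_idx, X, preds, max_conditions=3):
    """Greedily add threshold conditions that x_idx satisfies until the
    model predicts a single class on all covered training points."""
    x, target = X[x_idx], preds[x_idx]
    covered = np.ones(len(X), dtype=bool)
    rule = []
    for _ in range(max_conditions):
        if preds[covered].min() == preds[covered].max():
            break  # rule is already consistent on its coverage
        best = None
        for f in range(X.shape[1]):
            # x always satisfies both directions, so coverage stays nonempty.
            for op, cond in ((">=", X[:, f] >= x[f]), ("<=", X[:, f] <= x[f])):
                new = covered & cond
                agree = (preds[new] == target).mean()
                if best is None or agree > best[0]:
                    best = (agree, f, op, cond)
        _, f, op, cond = best
        rule.append(f"x[{f}] {op} {x[f]:.2f}")
        covered &= cond
    return rule, int(covered.sum())

rule, n_covered = summary_rule(0, X, preds)
print(" AND ".join(rule), f"(covers {n_covered} points)")
```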
- Research article, July 2019
Improving the Quality of Explanations with Local Embedding Perturbations
KDD '19: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Pages 875–884, https://doi.org/10.1145/3292500.3330930
Classifier explanations have been identified as a crucial component of knowledge discovery. Local explanations evaluate the behavior of a classifier in the vicinity of a given instance. A key step in this approach is to generate synthetic neighbors of ...
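The loop this abstract sketches, sample synthetic neighbors around an instance, label them with the black-box model, and fit an interpretable surrogate on that neighborhood, is the standard local-explanation recipe. A minimal sketch with plain Gaussian perturbations (the paper's contribution is precisely a smarter, embedding-based perturbation step); the toy model and all names below are illustrative, not the paper's code:

```python
# Local surrogate explanation via synthetic neighbors (LIME-style sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(x, predict_proba, n_neighbors=200, sigma=0.3, rng=None):
    """Fit a proximity-weighted linear surrogate around instance x."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Naive neighborhood: Gaussian perturbations of x. The paper replaces
    # this step with perturbations in a learned embedding space.
    neighbors = x + sigma * rng.standard_normal((n_neighbors, x.shape[0]))
    targets = predict_proba(neighbors)[:, 1]          # black-box labels
    # Weight neighbors by proximity so the surrogate stays local.
    weights = np.exp(-np.linalg.norm(neighbors - x, axis=1) ** 2 / sigma)
    surrogate = Ridge(alpha=1.0).fit(neighbors, targets, sample_weight=weights)
    return surrogate.coef_                            # per-feature attribution

print(explain_locally(X[0], black_box.predict_proba))
```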
- Short paper, July 2019
LIRME: Locally Interpretable Ranking Model Explanation
SIGIR '19: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Pages 1281–1284, https://doi.org/10.1145/3331184.3331377
Information retrieval (IR) models often employ complex variations in term weights to compute an aggregated similarity score of a query-document pair. Treating IR models as black-boxes makes it difficult to understand or explain why certain documents are ...
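One common way to explain a black-box ranking score locally, in the spirit this abstract describes, is to mask document terms at random, re-score each masked variant, and fit a linear surrogate over term-presence indicators. The sketch below follows that recipe with a stand-in term-overlap scorer; it is an assumption-laden illustration, not LIRME's actual sampling scheme or estimator:

```python
# Locally explaining a black-box query-document score (illustrative sketch).
import numpy as np
from sklearn.linear_model import LinearRegression

def score(query_terms, doc_terms):
    # Stand-in black-box scorer: query-document term overlap.
    return len(set(query_terms) & set(doc_terms))

def explain_ranking(query, doc, n_samples=300, keep_prob=0.7, seed=0):
    rng = np.random.default_rng(seed)
    masks = rng.random((n_samples, len(doc))) < keep_prob   # surviving terms
    scores = np.array([
        score(query, [t for t, keep in zip(doc, m) if keep]) for m in masks
    ])
    # Linear surrogate: each coefficient estimates a term's contribution
    # to the score of this particular query-document pair.
    surrogate = LinearRegression().fit(masks.astype(float), scores)
    return dict(zip(doc, surrogate.coef_))

query = ["neural", "ranking"]
doc = ["neural", "ranking", "models", "for", "retrieval"]
print(explain_ranking(query, doc))
```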