DOI: 10.1145/3430984.3431015

Reliable Counterfactual Explanations for Autoencoder based Anomalies

Published: 02 January 2021

Abstract

Autoencoders have been used successfully for tackling the problem of anomaly detection in an unsupervised setting, and are often known to give better results than traditional approaches such as clustering and subspace-based (linear) methods. A data point is flagged as anomalous by an autoencoder if its reconstruction loss is higher than an appropriate threshold. However, as with other deep learning models, the increased accuracy offered by autoencoders comes at the cost of interpretability. Explaining an autoencoder’s decision to flag a particular data point as an anomaly is of great importance, since a human-friendly explanation would be necessary for a domain expert tasked with evaluating the model’s decisions. We consider the problem of finding counterfactual explanations for autoencoder anomalies, which address the question of what needs to be minimally changed in a given anomalous data point to make it non-anomalous. We present an algorithm that generates a diverse set of proximate counterfactual explanations for a given autoencoder anomaly. We also introduce the notion of reliability of a counterfactual, and present techniques to find reliable counterfactual explanations.
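The core mechanics the abstract describes — flagging a point whose reconstruction loss exceeds a threshold, then searching for a minimal change that brings it back below — can be sketched as follows. This is a toy illustration using a linear autoencoder and a Wachter-style proximity-penalized gradient search, not the paper's own algorithm; all weights, dimensions, and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy *linear* autoencoder. In practice the encoder/decoder would be
# trained on normal data; random weights suffice to show the mechanics.
d, h = 4, 2                              # input dim, bottleneck dim
W_enc = rng.normal(size=(h, d)) * 0.5
W_dec = rng.normal(size=(d, h)) * 0.5
A = W_dec @ W_enc                        # overall reconstruction map x -> A x

def recon_loss(x):
    """Mean squared reconstruction error of the (linear) autoencoder."""
    r = A @ x - x
    return float(r @ r) / d

tau = 0.5  # anomaly threshold (illustrative); flag x if recon_loss(x) > tau

def counterfactual(x_anom, lam=0.05, lr=0.1, steps=500):
    """Gradient descent on recon loss + proximity penalty
    (a Wachter-style counterfactual objective, NOT the paper's algorithm):
        minimize  ||A x - x||^2 / d  +  lam * ||x - x_anom||^2
    """
    x = x_anom.copy()
    M = A - np.eye(d)
    for _ in range(steps):
        grad = 2 * M.T @ (M @ x) / d + 2 * lam * (x - x_anom)
        x -= lr * grad
    return x

x_anom = rng.normal(size=d) * 3.0        # a point with high reconstruction error
x_cf = counterfactual(x_anom)            # nearby point with lower recon loss
```

The proximity weight `lam` trades off how close the counterfactual stays to the anomaly against how far its reconstruction loss drops; the paper additionally asks for *diverse* and *reliable* counterfactuals, which this single-point sketch does not attempt.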




Published In

CODS-COMAD '21: Proceedings of the 3rd ACM India Joint International Conference on Data Science & Management of Data (8th ACM IKDD CODS & 26th COMAD)
January 2021, 453 pages

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. anomaly detection
  2. autoencoders
  3. counterfactual explanations
  4. machine learning

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

CODS COMAD 2021
CODS COMAD 2021: 8th ACM IKDD CODS and 26th COMAD
January 2 - 4, 2021
Bangalore, India

Acceptance Rates

Overall Acceptance Rate 197 of 680 submissions, 29%

Article Metrics

  • Downloads (last 12 months): 94
  • Downloads (last 6 weeks): 10

Reflects downloads up to 12 Nov 2024

Cited By

  • Evaluating Anomaly Explanations Using Ground Truth. AI 5:4, 2375–2392 (15 Nov 2024). https://doi.org/10.3390/ai5040117
  • Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review. ACM Computing Surveys 56:12, 1–42 (9 Jul 2024). https://doi.org/10.1145/3677119
  • Drawing Attributions From Evolved Counterfactuals. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 1582–1589 (14 Jul 2024). https://doi.org/10.1145/3638530.3664122
  • Towards Understanding Alerts raised by Unsupervised Network Intrusion Detection Systems. Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses, 135–150 (16 Oct 2023). https://doi.org/10.1145/3607199.3607247
  • Explainable AI: To Reveal the Logic of Black-Box Models. New Generation Computing 42:1, 53–87 (1 Feb 2023). https://doi.org/10.1007/s00354-022-00201-2
  • Framing Algorithmic Recourse for Anomaly Detection. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 283–293 (14 Aug 2022). https://doi.org/10.1145/3534678.3539344
  • PANDA: Human-in-the-Loop Anomaly Detection and Explanation. Information Processing and Management of Uncertainty in Knowledge-Based Systems, 720–732 (4 Jul 2022). https://doi.org/10.1007/978-3-031-08974-9_57
  • A Classification of Anomaly Explanation Methods. Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 26–33 (17 Feb 2022). https://doi.org/10.1007/978-3-030-93736-2_3
  • Anomaly explanation: A review. Data & Knowledge Engineering, 101946 (Nov 2021). https://doi.org/10.1016/j.datak.2021.101946
