DOI: 10.1145/3292500.3332281
Tutorial

Explainable AI in Industry

Published: 25 July 2019

Abstract

Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to growing concern about potential bias in these models and to demands for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust in and adoption of AI systems in high-stakes domains that require reliability and safety, such as healthcare and automated transportation, and in critical industrial applications with significant economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and (iii) designing measures for evaluating the performance of models in explainability tasks.
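To make challenge (iii) concrete, one commonly used measure is the fidelity of an interpretable surrogate to the model it explains, i.e., how often the surrogate reproduces the black-box model's predictions. The sketch below is a minimal illustration of that idea on synthetic data; the model and dataset choices are assumptions made purely for demonstration and are not material from the tutorial.

```python
# Minimal sketch (illustrative only): fidelity of an interpretable surrogate
# to a black-box model, one common way to evaluate a global explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The "black box" whose behavior we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A shallow decision tree trained to mimic the black box's predictions
# (its labels come from the black box, not from the ground truth y).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of points on which the surrogate agrees with the black box.
# Computed on the training data here for brevity; in practice one would use
# held-out data.
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.3f}")
```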
In this tutorial, we will present an overview of model interpretability and explainability in AI, key regulations/laws, and techniques/tools for providing explainability as part of AI/ML systems. Then, we will focus on the application of explainability techniques in industry, where we present practical challenges/guidelines for using explainability techniques effectively and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, sales, lending, and fraud detection. Finally, based on our experiences in industry, we will identify open problems and research directions for the data mining/machine learning community.
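For a concrete flavor of the model-agnostic techniques surveyed in such a tutorial, the sketch below applies permutation feature importance to a classifier trained on synthetic data. The "lending-style" feature names and all modeling choices are invented for illustration; they do not come from the tutorial or its case studies.

```python
# Minimal sketch (illustrative only): a model-agnostic global explanation via
# permutation feature importance on a synthetic binary classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical feature names, used only to make the output readable.
feature_names = ["income", "debt_ratio", "credit_age", "num_inquiries", "utilization"]

X, y = make_classification(n_samples=3000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permute each feature on held-out data and measure the drop in accuracy;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>15}: {score:.3f}")
```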

Supplementary Material

Part 1 of 2 (p3203-gade_part1.mp4)
Part 2 of 2 (p3203-gade_part2.mp4)

Published In

KDD '19: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
July 2019
3305 pages
ISBN: 9781450362016
DOI: 10.1145/3292500
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 25 July 2019

Author Tags

  1. explainable ai
  2. industry case studies
  3. ml model transparency and interpretability

Qualifiers

  • Tutorial

Conference

KDD '19

Acceptance Rates

KDD '19 paper acceptance rate: 110 of 1,200 submissions (9%)
Overall acceptance rate: 1,133 of 8,635 submissions (13%)

Article Metrics

  • Downloads (last 12 months): 888
  • Downloads (last 6 weeks): 91
Reflects downloads up to 09 Nov 2024

