A Multi-layered Approach for Tailored Black-Box Explanations

Conference paper in: Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12663)

Abstract

Explanations for algorithmic decision systems can take different forms and can target different types of users with different goals. One of the main challenges in this area is therefore to devise explanation methods that can accommodate this variety of situations. A first step towards addressing this challenge is to allow explainees to express their needs in the most convenient way, depending on their level of expertise and motivation. In this paper, we present a solution to this problem based on a multi-layered approach that allows users to express their requests for explanations at different levels of abstraction. We illustrate the approach by applying a proof-of-concept system called IBEX to two case studies.
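To make the idea of layered requests concrete, here is a purely illustrative sketch. The class name, layer fields, and defaulting logic below are our own assumptions for illustration, not IBEX's actual interface or the authors' design: a lay explainee could stop at a high-level question, while a more expert one could override lower, more technical layers.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a multi-layered explanation request.
# Layer names, fields, and defaults are illustrative assumptions,
# not the actual request language described in the paper.

@dataclass
class ExplanationRequest:
    # Layer 1: high-level intent, suitable for non-expert explainees.
    question: str                       # e.g. "Why was my loan denied?"
    # Layer 2: optional refinement of the desired explanation form.
    style: Optional[str] = None         # e.g. "counterfactual", "feature importance"
    # Layer 3: optional technical parameters, for expert explainees.
    max_features: Optional[int] = None  # cap on features shown in the explanation

    def resolve(self) -> dict:
        """Fill unspecified lower layers with defaults so every request
        is complete regardless of the explainee's level of expertise."""
        return {
            "question": self.question,
            "style": self.style or "feature importance",
            "max_features": self.max_features or 5,
        }

# A lay user supplies only the top layer; an expert overrides more layers.
lay_request = ExplanationRequest(question="Why was my loan denied?")
expert_request = ExplanationRequest(
    question="Why was my loan denied?",
    style="counterfactual",
    max_features=3,
)
print(lay_request.resolve())
print(expert_request.resolve())
```

The design point this sketch tries to capture is that each layer refines the one above it, so users choose their entry point rather than facing a single fixed interface.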


Notes

  1. With the associated learning data set, if available.

  2. Other taxonomies of explainees’ profiles have already been proposed, in particular in [4] and [5]. Our contribution is consistent with them, but involves some simplifications, justified by pragmatic needs.

  3. In addition to the ADS, as defined in the context.

  4. In the current version of IBEX, threshold \(T_1\) is set to 10 and \(T_2\) is set to 50.

  5. https://gitlab.inria.fr/chenin/ibex.

  6. https://archive.ics.uci.edu/ml/datasets/Adult.

  7. https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data).

References

  1. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. (CSUR) 51(5), 1–42 (2018). Article no. 93

  2. Miller, T., Howe, P., Sonenberg, L.: Explainable AI: beware of inmates running the asylum. In: IJCAI 2017 Workshop on Explainable AI (XAI), vol. 36 (2017)

  3. Henin, C., Le Métayer, D.: Towards a generic framework for black-box explanations of algorithmic decision systems (extended version). Inria Research Report 9276. https://hal.inria.fr/hal-02131174

  4. Tomsett, R., Braines, D., Harborne, D., Preece, A.D., Chakraborty, S.: Interpretable to whom? A role-based model for analyzing interpretable machine learning systems. CoRR abs/1806.07552 (2018)

  5. Arrieta, A.B., et al.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. arXiv:1910.10045 [cs]

  6. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267 (2019). https://doi.org/10.1016/j.artint.2018.07.007

  7. Weller, A.: Challenges for transparency. arXiv:1708.01870 [cs]

  8. Lipton, Z.C.: The mythos of model interpretability. arXiv:1606.03490 [cs, stat]

  9. Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harvard J. Law Technol. 31, 841–887 (2018)

  10. Stumpf, S., et al.: Toward harnessing user feedback for machine learning (2007)

  11. Lim, B.Y., Dey, A.K., Avrahami, D.: Why and why not explanations improve the intelligibility of context-aware intelligent systems. In: Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI 2009), p. 2119. ACM Press (2009). https://doi.org/10.1145/1518701.1519023

  12. Lakkaraju, H., Kamar, E., Caruana, R., Leskovec, J.: Interpretable & explorable approximations of black box models. arXiv preprint arXiv:1707.01154

  13. Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: high-precision model-agnostic explanations. In: AAAI Conference on Artificial Intelligence (2018)

  14. Laugel, T., Lesot, M.-J., Marsala, C., Renard, X., Detyniecki, M.: The dangers of post-hoc interpretability: unjustified counterfactual explanations. arXiv:1907.09294 [cs, stat]

  15. Henin, C., Le Métayer, D.: A multi-layered approach for interactive black-box explanations. Inria Research Report 9331. https://hal.inria.fr/hal-02498418

  16. Doshi-Velez, F., Kim, B.: Towards a rigorous science of interpretable machine learning. arXiv:1702.08608 (2017)

  17. Henin, C., Le Métayer, D.: A generic framework for black-box explanations. In: Proceedings of the International Workshop on Fair and Interpretable Learning Algorithms (FILA 2020). IEEE (2020)

  18. Ras, G., van Gerven, M., Haselager, P.: Explanation methods in deep learning: users, values, concerns and challenges. CoRR abs/1803.07517. http://arxiv.org/abs/1803.07517

  19. Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018). https://doi.org/10.1109/ACCESS.2018.2870052

  20. Wolf, C.T.: Explainability scenarios: towards scenario-based XAI design. In: Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI 2019), pp. 252–257. ACM Press (2019). https://doi.org/10.1145/3301275.3302317

  21. Poursabzi-Sangdeh, F., Goldstein, D.G., Hofman, J.M., Vaughan, J.W., Wallach, H.: Manipulating and measuring model interpretability. arXiv:1802.07810 [cs]

  22. Hall, M., et al.: A systematic method to understand requirements for explainable AI (XAI) systems (2019)

  23. Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. arXiv:1909.03012 [cs, stat]

  24. Sokol, K., Flach, P.: One explanation does not fit all: the promise of interactive explanations for machine learning transparency. KI - Künstliche Intelligenz (2020). https://doi.org/10.1007/s13218-020-00637-y

  25. Walton, D.: A dialogue system specification for explanation. Synthese 182(3), 349–374 (2011). https://doi.org/10.1007/s11229-010-9745-z

  26. Madumal, P., Miller, T., Sonenberg, L., Vetere, F.: A grounded interaction protocol for explainable artificial intelligence. arXiv:1903.02409 [cs]

  27. Nori, H., Jenkins, S., Koch, P., Caruana, R.: InterpretML: a unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223

  28. Klaise, J., Van Looveren, A., Vacanti, G., Coca, A.: Alibi: algorithms for monitoring and explaining machine learning models (2020)

  29. Biecek, P.: DALEX: explainers for complex predictive models in R. J. Mach. Learn. Res. 19(84), 1–5 (2018)

  30. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)

  31. Dhurandhar, A., Iyengar, V., Luss, R., Shanmugam, K.: A formal framework to characterize interpretability of procedures. arXiv:1707.03886 [cs]


Author information

Correspondence to Clément Henin.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Henin, C., Le Métayer, D. (2021). A Multi-layered Approach for Tailored Black-Box Explanations. In: Del Bimbo, A., et al. (eds.) Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol. 12663. Springer, Cham. https://doi.org/10.1007/978-3-030-68796-0_1


  • DOI: https://doi.org/10.1007/978-3-030-68796-0_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68795-3

  • Online ISBN: 978-3-030-68796-0

  • eBook Packages: Computer Science, Computer Science (R0)
