Abstract
Explanations for algorithmic decision systems can take different forms and can target different types of users with different goals. One of the main challenges in this area is therefore to devise explanation methods that can accommodate this variety of situations. A first step toward addressing this challenge is to allow explainees to express their needs in the most convenient way, depending on their level of expertise and motivation. In this paper, we present a solution to this problem based on a multi-layered approach that allows users to express their requests for explanations at different levels of abstraction. We illustrate the approach by applying a proof-of-concept system called IBEX to two case studies.
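The layered idea described in the abstract can be sketched as follows. The abstract does not show IBEX's actual interface, so every name in this sketch (`ExplanationRequest`, `PRESETS`, `refine`) is a hypothetical illustration of the general principle: a request for an explanation can be stated at a high level of abstraction (a preset goal chosen by a novice) or refined with low-level, method-specific parameters (by an expert).

```python
# Illustrative sketch only: all names below are hypothetical, not IBEX's API.
# The point is the layered structure of a request, from abstract to concrete.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExplanationRequest:
    goal: str                                   # most abstract layer: why is an explanation wanted?
    form: Optional[str] = None                  # middle layer: e.g. "counterfactual", "rule"
    params: dict = field(default_factory=dict)  # most concrete layer: method-specific settings

# Presets let a non-expert explainee express a need without technical detail.
PRESETS = {
    "why": ExplanationRequest(goal="justify decision", form="rule"),
    "how-to-change": ExplanationRequest(goal="obtain recourse", form="counterfactual"),
}

def refine(request: ExplanationRequest, **params) -> ExplanationRequest:
    """An expert user descends one layer by supplying concrete parameters."""
    request.params.update(params)
    return request

novice = PRESETS["why"]                                    # abstract request, no parameters
expert = refine(PRESETS["how-to-change"], max_features=3)  # same kind of request, refined
print(expert.goal, expert.form, expert.params)
```

The design choice illustrated here is that both kinds of users manipulate the same request object; expertise only determines how many of its layers they fill in themselves.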
Notes
- 1.
With the associated learning data set, if available.
- 3.
In addition to the ADS, as defined in the context.
- 4.
In the current version of IBEX, threshold \(T_1\) is set to 10 and \(T_2\) is set to 50.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Henin, C., Le Métayer, D. (2021). A Multi-layered Approach for Tailored Black-Box Explanations. In: Del Bimbo, A., et al. Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science(), vol 12663. Springer, Cham. https://doi.org/10.1007/978-3-030-68796-0_1
Print ISBN: 978-3-030-68795-3
Online ISBN: 978-3-030-68796-0