Abstract
Explainable AI constitutes a fundamental step towards establishing fairness and addressing bias in algorithmic decision-making. Despite the large body of work on the topic, the benefits of proposed solutions are mostly evaluated from a conceptual or theoretical point of view, and their usefulness for real-world use cases remains uncertain. In this work, we aim to state clear user-centric desiderata for explainable AI reflecting common explainability needs experienced in statistical production systems of the European Central Bank. We link the desiderata to archetypical user roles and give examples of techniques and methods which can be used to address the users’ needs. To this end, we provide two concrete use cases from the domain of statistical data production in central banks: the detection of outliers in the Centralised Securities Database and the data-driven identification of data quality checks for the Supervisory Banking data system.
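To make the kind of explainability discussed here concrete, the following minimal sketch (not taken from the paper; the data, feature names, and model choice are purely illustrative assumptions) pairs a tree-based classifier with SHAP feature attributions, roughly in the spirit of explaining why an individual record is flagged as an outlier:

```python
# Illustrative sketch only: synthetic data and placeholder feature names,
# not the authors' production pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
import shap  # SHAP library (Lundberg and Lee)

rng = np.random.default_rng(0)

# Synthetic records with three numeric attributes and a label marking
# whether a record was flagged as an outlier.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(scale=0.1, size=500) > 2).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Local explanation: per-feature SHAP contributions for one flagged record.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
for name, value in zip(["price", "volume", "coupon"], shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

Such per-record attributions are one way a data compiler could be shown which attributes drove a flag, rather than receiving an unexplained score.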
Notes
- 1. We refer to the concept of personas as it is used in approaches like design thinking.
- 2. Please refer to some of the papers referenced in related work for comprehensive surveys.
- 3. We hypothesise that this is one of the most challenging fields for xAI research.
Acknowledgements
This work was partially funded by the European Commission under the NoBIAS project, H2020-MSCA-ITN-2019, GA No. 860630.
Additional information
Disclaimer: This paper should not be reported as representing the views of the European Central Bank (ECB). The views expressed are those of the authors and do not necessarily reflect those of the ECB.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Navarro, C.M., Kanellos, G., Gottron, T. (2021). Desiderata for Explainable AI in Statistical Production Systems of the European Central Bank. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham. https://doi.org/10.1007/978-3-030-93736-2_42
DOI: https://doi.org/10.1007/978-3-030-93736-2_42
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-93735-5
Online ISBN: 978-3-030-93736-2
eBook Packages: Computer Science, Computer Science (R0)