Desiderata for Explainable AI in Statistical Production Systems of the European Central Bank

  • Conference paper
  • In: Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021)

Abstract

Explainable AI constitutes a fundamental step towards establishing fairness and addressing bias in algorithmic decision-making. Despite the large body of work on the topic, the benefits of proposed solutions are mostly evaluated from a conceptual or theoretical point of view, and their usefulness for real-world use cases remains uncertain. In this work, we state clear user-centric desiderata for explainable AI that reflect common explainability needs experienced in statistical production systems of the European Central Bank. We link the desiderata to archetypical user roles and give examples of techniques and methods that can be used to address the users' needs. To this end, we provide two concrete use cases from the domain of statistical data production in central banks: the detection of outliers in the Centralised Securities Database and the data-driven identification of data quality checks for the Supervisory Banking data system.
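
As a rough illustration of the first use case, the general pattern of explainable outlier detection can be sketched with off-the-shelf tools: an anomaly detector flags suspicious records, and a feature-attribution method explains why each record was flagged. The snippet below is a minimal sketch assuming an IsolationForest detector explained with SHAP's TreeExplainer; the features and data are hypothetical, and this is not the authors' actual CSDB pipeline.

    # Minimal sketch of explainable outlier detection (hypothetical data,
    # not the CSDB pipeline described in the paper).
    import numpy as np
    from sklearn.ensemble import IsolationForest
    import shap

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))   # stand-in for security attributes
    X[:5] += 6.0                    # inject a few synthetic outliers

    model = IsolationForest(random_state=0).fit(X)
    scores = model.decision_function(X)   # lower score = more anomalous
    flagged = np.argsort(scores)[:5]      # five most anomalous records

    # SHAP attributes each anomaly score to the input features, giving
    # the analyst a per-record explanation of why it was flagged.
    explainer = shap.TreeExplainer(model)
    attributions = explainer.shap_values(X[flagged])
    print(attributions.shape)             # (5, 3): one value per feature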

Notes

  1. We refer to the concept of personas as it is used in approaches like design thinking.

  2. Please refer to some of the papers referenced in related work for comprehensive surveys.

  3. We hypothesise that this is probably one of the most challenging fields for xAI research.

Acknowledgements

This work was partially funded by the European Commission under the NoBIAS project, H2020-MSCA-ITN-2019, GA No. 860630.

Author information

Corresponding author

Correspondence to Carlos Mougan Navarro.

Additional information

Disclaimer: This paper should not be reported as representing the views of the European Central Bank (ECB). The views expressed are those of the authors and do not necessarily reflect those of the ECB.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Navarro, C.M., Kanellos, G., Gottron, T. (2021). Desiderata for Explainable AI in Statistical Production Systems of the European Central Bank. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham. https://doi.org/10.1007/978-3-030-93736-2_42

  • DOI: https://doi.org/10.1007/978-3-030-93736-2_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93735-5

  • Online ISBN: 978-3-030-93736-2

  • eBook Packages: Computer Science, Computer Science (R0)
