Abstract
This paper presents Ensemble Feature Importance (EFI), an open-source Python toolbox that provides machine learning (ML) researchers, domain experts, and decision makers with robust and accurate feature importance quantification, and with more reliable mechanistic interpretation of feature importance for prediction problems, based on fuzzy sets. The toolbox was developed to address the uncertainty in feature importance quantification, and the resulting lack of trustworthy feature importance interpretation, caused by the diversity of available ML algorithms, feature importance calculation methods, and dataset dependencies. EFI merges the results of multiple ML models and feature importance calculation approaches using data bootstrapping and decision fusion techniques such as mean aggregation, majority voting, and fuzzy logic. The main attributes of the EFI toolbox are: (i) automatic optimisation of ML algorithms; (ii) automatic computation of a set of feature importance coefficients from the optimised ML algorithms and feature importance calculation techniques; (iii) automatic aggregation of the importance coefficients using multiple decision fusion techniques; and (iv) fuzzy membership functions that show the importance of each feature to the prediction task. The key modules and functions of the toolbox are described, and a simple example of their application is presented using the popular Iris dataset.
A. Kumar and J. M. Mase contributed equally to this work.
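As a concrete illustration of the four attributes listed in the abstract, the following is a minimal sketch of an ensemble feature importance fusion workflow on the Iris dataset, written with plain scikit-learn and NumPy. It is a sketch of the general technique only, not the EFI toolbox API: every name below is illustrative, the hyperparameter grids are deliberately tiny, and the triangular membership functions stand in for whatever fuzzy sets the toolbox actually constructs.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import GridSearchCV

    data = load_iris()
    X, y, feature_names = data.data, data.target, data.feature_names
    rng = np.random.default_rng(0)

    # (i) Optimise each ML algorithm; the grids are deliberately tiny here.
    models = {
        "rf": GridSearchCV(RandomForestClassifier(random_state=0),
                           {"n_estimators": [100, 200]}, cv=3).fit(X, y).best_estimator_,
        "gb": GridSearchCV(GradientBoostingClassifier(random_state=0),
                           {"n_estimators": [50, 100]}, cv=3).fit(X, y).best_estimator_,
    }

    # (ii) Bootstrapped importance coefficients from each optimised model,
    # computed with model-agnostic permutation importance.
    runs = []
    for model in models.values():
        for _ in range(10):                          # 10 bootstrap resamples
            idx = rng.integers(0, len(X), len(X))
            model.fit(X[idx], y[idx])
            result = permutation_importance(model, X[idx], y[idx],
                                            n_repeats=5, random_state=0)
            runs.append(result.importances_mean)
    runs = np.vstack(runs)

    # Min-max normalise each run to [0, 1] so coefficients are comparable
    # across models and resamples.
    runs = (runs - runs.min(axis=1, keepdims=True)) / \
           (np.ptp(runs, axis=1, keepdims=True) + 1e-12)

    # (iii) Decision fusion: mean aggregation, plus majority voting on the
    # top-ranked feature of each run.
    mean_importance = runs.mean(axis=0)
    top_votes = np.bincount(runs.argmax(axis=1), minlength=X.shape[1])

    # (iv) Fuzzy interpretation: triangular membership functions ("low",
    # "moderate", "high") over the normalised importance scale. These
    # triangles are an assumption for illustration, not the EFI fuzzy sets.
    def tri(x, a, b, c):
        """Degree of membership in a triangle with corners a <= b <= c."""
        return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

    for i, name in enumerate(feature_names):
        m = mean_importance[i]
        print(f"{name:25s} mean={m:.3f} votes={top_votes[i]:2d} "
              f"low={tri(m, -0.5, 0.0, 0.5):.2f} "
              f"moderate={tri(m, 0.0, 0.5, 1.0):.2f} "
              f"high={tri(m, 0.5, 1.0, 1.5):.2f}")

The fusion stage keeps both a crisp aggregate (the mean) and a vote count because the abstract names several decision fusion techniques; the fuzzy stage then maps each fused coefficient onto "low", "moderate", and "high" importance labels, the kind of linguistic interpretation that the membership functions in attribute (iv) are meant to support.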
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Kumar, A., et al. (2023). EFI: A Toolbox for Feature Importance Fusion and Interpretation in Python. In: Nicosia, G., et al. (eds.) Machine Learning, Optimization, and Data Science. LOD 2022. Lecture Notes in Computer Science, vol. 13811. Springer, Cham. https://doi.org/10.1007/978-3-031-25891-6_19