Bankruptcy Prediction for Banks: An Artificial Intelligence Approach to Improve Understandability

  • Chapter
Artificial Intelligence, Evolutionary Computing and Metaheuristics

Abstract

Artificial Intelligence (AI) is a prominent field within Computer Science whose main goal is automatic problem solving. Some of the foundations of this area were established by Alan M. Turing in his two seminal papers on machine intelligence [39] and [40]. Machine Learning (ML) is an important branch of AI which is currently undergoing intensive development due to its wide range of applications. In particular, ML techniques have recently gained recognition in finance, since they are capable of producing useful models. However, the difficulty, and in some cases the impossibility, of interpreting these models has limited the use of ML techniques in problems where interpretability is an important issue. Bankruptcy prediction for banks is a task which demands understandability of the solution. Furthermore, the analysis of the features (input variables) used to create prediction models provides better knowledge of the conditions which may trigger bank defaults. Selecting meaningful features before executing the learning process is beneficial since it reduces the dimensionality of the data and thereby the size of the hypothesis space; the result is a more compact representation which is easier to interpret. The main contributions of this work are, first, the use of the evolutionary technique called Multi-Population Evolving Decision Rules (MP-EDR) to determine the relevance of features from Federal Deposit Insurance Corporation (FDIC) data for predicting bank bankruptcy and, second, the representation of the features' relevance by means of a network built from the rules and conditions produced by MP-EDR. Such a representation helps to disentangle the relationships between features in the model, and it is complemented by metrics which measure the relevance of those features.

The views expressed here are those of the authors and do not represent the views of Banco de Mexico.
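
The chapter itself presents no source code; the snippet below is only a rough Python sketch of the approach outlined in the abstract: evolved decision rules are read as conjunctions of conditions on features, each feature's relevance is estimated from how often it appears across the rules, and features that co-occur in a rule are linked to form a small relevance network. The rule set, feature names, and the frequency-based relevance metric are illustrative assumptions rather than the MP-EDR algorithm or the FDIC variables used by the authors.

```python
# Minimal sketch (assumption: not the authors' MP-EDR implementation) of the idea in
# the abstract: treat each decision rule as a conjunction of conditions on features,
# score a feature's relevance by how often it appears across rules, and connect
# features that co-occur in the same rule to form a relevance network.
from collections import Counter
from itertools import combinations

# Each rule is a list of (feature, operator, threshold) conditions.
# Feature names and thresholds are illustrative placeholders, not FDIC variables.
rules = [
    [("capital_ratio", "<", 0.04), ("nonperforming_loans", ">", 0.08)],
    [("capital_ratio", "<", 0.05), ("return_on_assets", "<", 0.00)],
    [("liquidity_ratio", "<", 0.10), ("nonperforming_loans", ">", 0.06)],
]

# Relevance metric (assumed): fraction of rules in which a feature appears.
appearances = Counter(
    feature for rule in rules for feature in {f for f, _, _ in rule}
)
relevance = {feature: count / len(rules) for feature, count in appearances.items()}

# Network: an undirected edge between two features, weighted by the number of rules
# that contain both of them.
edges = Counter()
for rule in rules:
    features = sorted({feature for feature, _, _ in rule})
    for a, b in combinations(features, 2):
        edges[(a, b)] += 1

print("feature relevance:", relevance)
print("co-occurrence edges:", dict(edges))
```

In the chapter, the analogous network is built from the rules and conditions produced by MP-EDR and is read together with relevance metrics to interpret which conditions may precede bank defaults.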

References

1. Blum, A., Langley, P.: Selection of relevant features and examples in machine learning. Artificial Intelligence 97(1-2), 245–271 (1997)
2. Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Wadsworth International Group, United States of America (1984)
3. Cao, L.J., Tay, F.E.H.: Feature Selection for Support Vector Machines in Financial Time Series Forecasting. In: Leung, K.-S., Chan, L., Meng, H. (eds.) IDEAL 2000. LNCS, vol. 1983, pp. 268–273. Springer, Heidelberg (2000)
4. Dash, M., Liu, H.: Feature selection for classification. Intelligent Data Analysis 1(1-4), 131–156 (1997)
5. Efron, B.: Bootstrap methods: Another look at the jackknife. The Annals of Statistics 7(1), 1–26 (1979)
6. Forman, G.: An extensive empirical study of feature selection metrics for text classification. Journal of Machine Learning Research 3, 1289–1305 (2003)
7. Garcia-Almanza, A.L., Alexandrova-Kabadjova, B., Martinez-Jaramillo, S.: Understanding bank failure: A close examination of rules created by genetic programming. In: IEEE Electronics, Robotics and Automotive Mechanics Conference, CERMA (September 2010)
8. Garcia-Almanza, A.L., Alexandrova-Kabadjova, B., Martinez-Jaramillo, S.: Understanding bank failure: A close examination of rules created by genetic programming. In: Proceedings of the IEEE Electronics, Robotics and Automotive Mechanics Conference (CERMA), pp. 34–39. IEEE (2010)
9. Garcia-Almanza, A.L., Martinez-Jaramillo, S., Alexandrova-Kabadjova, B., Tsang, E.: Early signals for supervisory actions to prevent bank bankruptcy. In: Yap, A. (ed.) Information Systems for Global Financial Markets: Emerging Developments and Effects. IGI Global (2011)
10. Garcia-Almanza, A.L., Tsang, E.: Evolving decision rules to predict investment opportunities. International Journal of Automation and Computing 5(1), 22–31 (2008)
11. Garcia-Almanza, A.L., Tsang, E., Galvan-Lopez, E.: Evolving decision rules to discover patterns in financial data sets. Computational Methods in Financial Engineering (2007)
12. Greiner, M., Pfeiffer, D., Smith, R.D.: Principles and practical application of receiver-operating characteristic analysis for diagnostic tests. Preventive Veterinary Medicine 45, 23–41 (2000)
13. Guyon, I., Elisseeff, A.: An introduction to variable and feature selection. Journal of Machine Learning Research 3, 1157–1182 (2003)
14. Guyon, I., Gunn, S., Nikravesh, M., Zadeh, L.A. (eds.): Feature Extraction: Foundations and Applications. STUDFUZZ. Springer, Berlin (2006)
15. Hall, M.A., Smith, L.A.: Feature selection for machine learning: Comparing a correlation-based filter approach to the wrapper (1999)
16. Huang, C.-L., Wang, C.-J.: A GA-based feature selection and parameters optimization for support vector machines. Expert Systems with Applications 31(2), 231–240 (2006)
17. Jensen, R., Shen, Q. (eds.): Computational Intelligence and Feature Selection: Rough and Fuzzy Approaches. IEEE Press Series on Computational Intelligence. Wiley and Sons, Inc., Hoboken (2008)
18. John, G.H., Kohavi, R., Pfleger, K.: Irrelevant features and the subset selection problem, pp. 121–129 (1994)
19. Kim, Y.S., Street, N., Menczer, F.: Feature selection in data mining. In: Wang, J. (ed.) Data Mining: Opportunities and Challenges, pp. 80–105. Idea Group Publishing (2003)
20. Kohavi, R., John, G.H.: Wrappers for feature subset selection. Artificial Intelligence 97(1-2), 273–324 (1997)
21. Koza, J.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. The MIT Press, Cambridge (1992)
22. Koza, J.R.: Genetic programming (1997)
23. Landry, J.-A., Costa, L.D., Bernier, T.: Discriminant feature selection by genetic programming: Towards a domain independent multi-class object detection system. Systemics, Cybernetics and Informatics 3(1), 76–81 (2006)
24. Langdon, W.B., Poli, R.: Fitness causes bloat. In: Chawdhry, P.K., Roy, R., Pant, R.K. (eds.) Soft Computing in Engineering Design and Manufacturing, pp. 13–22. Springer, London (1997)
25. Langley, P.: Selection of relevant features in machine learning. In: Proceedings of the AAAI Fall Symposium on Relevance, pp. 140–144. AAAI Press (1994)
26. Liu, H., Motoda, H. (eds.): Feature Selection for Knowledge Discovery and Data Mining. The Springer International Series in Engineering and Computer Science. Springer, Heidelberg (1998)
27. Liu, H., Motoda, H. (eds.): Computational Methods of Feature Selection. Data Mining and Knowledge Discovery Series. Chapman and Hall/CRC, Boca Raton, Florida (2007)
28. Liu, H., Motoda, H., Setiono, R., Zhao, Z.: Feature selection: An ever evolving frontier in data mining. Journal of Machine Learning Research - Proceedings Track 10, 4–13 (2010)
29. Liu, H., Yu, L.: Toward integrating feature selection algorithms for classification and clustering. IEEE Transactions on Knowledge and Data Engineering 17(4), 491–502 (2005)
30. Liu, Y., Schumann, M.: Data mining feature selection for credit scoring models. Journal of the Operational Research Society 56, 1099–1108 (2005)
31. Nordin, P., Francone, F., Banzhaf, W.: Explicitly defined introns and destructive crossover in genetic programming. In: Rosca, J.P. (ed.) Proceedings of the Workshop on Genetic Programming: From Theory to Real-World Applications, Tahoe City, California, USA, July 9, pp. 6–22 (1995)
32. Okun, O. (ed.): Feature Selection and Ensemble Methods for Bioinformatics: Algorithmic Classification and Implementations. Medical Information Science Reference (2011)
33. Pitt, E., Nayak, R.: The use of various data mining and feature selection methods in the analysis of a population survey dataset (2007)
34. Simon, H.A.: Models of Man: Social and Rational. John Wiley and Sons, Inc., New York (1957)
35. Simon, H.A.: Models of Bounded Rationality, vol. 2. MIT Press, Cambridge (1982)
36. Simon, H.A.: Models of Bounded Rationality: Economic Analysis and Public Policy, vol. 1. The MIT Press, Cambridge (1984)
37. Simon, H.A.: Models of Bounded Rationality: Empirically Grounded Economic Reason, vol. 3. The MIT Press, Cambridge (1997)
38. Teller, A.: Turing completeness in the language of genetic programming with indexed memory. In: Proceedings of the 1994 IEEE World Congress on Computational Intelligence, June 27–29, pp. 136–141. IEEE Press, Orlando (1994)
39. Turing, A.M.: Intelligent machinery. Report, National Physical Laboratory (1948)
40. Turing, A.M.: Computing machinery and intelligence. Mind 59, 433–460 (1950)
41. Vafaie, H., De Jong, K.: Genetic algorithms as a tool for feature selection in machine learning, pp. 200–204. Society Press (1992)
42. Wang, X., Yang, J., Teng, X., Xia, W., Jensen, R.: Feature selection based on rough sets and particle swarm optimization. Pattern Recognition Letters 28(4), 459–471 (2007)
43. Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T., Vapnik, V.: Feature selection for SVMs. In: Leen, T.K., Dietterich, T.G., Tresp, V. (eds.) Advances in Neural Information Processing Systems, vol. 13, pp. 668–674. The MIT Press, Cambridge (2001)
44. Yabuki, T., Iba, H.: Turing-complete data structure for genetic programming. In: IEEE International Conference on Systems, Man and Cybernetics, October 5–8, pp. 3577–3582. IEEE Press, Washington, D.C. (2003)
45. Yang, J., Honavar, V.: Feature subset selection using a genetic algorithm. IEEE Intelligent Systems 13(2), 44–49 (1998)
46. Yang, Y., Pedersen, J.O.: A comparative study of feature selection in text categorization. In: ICML 1997: Proceedings of the Fourteenth International Conference on Machine Learning, pp. 412–420. Morgan Kaufmann Publishers, San Francisco (1997)
47. Yang, Y., Pedersen, J.O.: A comparative study on feature selection in text categorization. In: Proceedings of the Fourteenth International Conference on Machine Learning, ICML 1997, pp. 412–420. Morgan Kaufmann Publishers Inc., San Francisco (1997)

Author information

Corresponding author

Correspondence to Alma Lilia Garcia-Almanza.

Copyright information

© 2013 Springer-Verlag GmbH Berlin Heidelberg

About this chapter

Cite this chapter

Garcia-Almanza, A.L., Alexandrova-Kabadjova, B., Martinez-Jaramillo, S. (2013). Bankruptcy Prediction for Banks: An Artificial Intelligence Approach to Improve Understandability. In: Yang, X.-S. (ed.) Artificial Intelligence, Evolutionary Computing and Metaheuristics. Studies in Computational Intelligence, vol. 427. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29694-9_24

  • DOI: https://doi.org/10.1007/978-3-642-29694-9_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-29693-2

  • Online ISBN: 978-3-642-29694-9

  • eBook Packages: Engineering, Engineering (R0)
