AI in the Financial Sector: The Line between Innovation, Regulation and Ethical Responsibility
Abstract
1. Introduction
- AI assistants and chatbots use natural language processing to deliver immediate customer support and provide customized financial advice.
- AI technologies help banks and credit lenders make wiser underwriting decisions by drawing on a wider range of factors, enabling them to assess historically underserved borrowers.
- AI-powered systems evaluate large, complex datasets faster and more effectively than humans can, automating trades through algorithmic processes.
- Financial specialists use AI to spot trends, identify risks, save labor, and ensure better information for future planning. AI and ML are increasingly used to construct more precise, nimble models.
- AI enhances the security of online banking by improving fraud detection and prevention (a minimal fraud-scoring sketch follows the research questions below).
- What are the applications of AI in the banking and finance industries?
- What are the benefits and challenges of AI adoption in these industries?
- What are the current AI regulations and governance frameworks?
- Which theories are relevant for further research?
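To make the fraud-detection application above concrete, the following is a minimal, hypothetical sketch of anomaly-based transaction scoring using scikit-learn's IsolationForest. The transaction features, thresholds, and data are invented for illustration and are not drawn from the article or any specific bank's system.

```python
# Minimal sketch of anomaly-based fraud scoring; all values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical transaction features: amount, hour of day, distance from home (km)
normal = np.column_stack([
    rng.normal(50, 20, 1000),     # typical purchase amounts
    rng.integers(8, 22, 1000),    # daytime hours
    rng.exponential(5, 1000),     # short distances
])
suspicious = np.array([[5000, 3, 800]])  # large amount, 3 a.m., far from home

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies, 1 for inliers
print(model.predict(suspicious))            # likely [-1] -> flag for human review
print(model.decision_function(suspicious))  # lower score = more anomalous
```

In practice, such scores would feed a review queue rather than trigger automatic blocking, consistent with the human-oversight principles discussed later in the article.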
2. Overview of AI in Banking/Fintech
2.1. AI Definition
2.2. AI in Finance
2.3. AI Challenges and Ethical Issues
2.3.1. Data Privacy and Security
2.3.2. Bias and Fairness
2.3.3. Accountability and Transparency
2.3.4. Skill Gap
2.4. Integration of AI and Fintech
3. Methodology
4. Applications of AI in Banking and Finance
4.1. Preventing Financial Crimes
4.2. Credit Risk Assessment
4.3. Customer Service
4.4. Investment Management
5. Benefits of AI in Banking and Finance
5.1. Enhanced Efficiency and Cost Reduction
5.2. Improved Decision Making
- AI systems with low potential for harm may operate autonomously with little human assistance, but they must still be used responsibly and remain subject to disclosure and transparency obligations.
- AI systems must have a suitable degree of human control (full control or supervisory control), depending on the type of system and the industry in which it is deployed, to ensure that AI-augmented judgments are supervised.
- AI systems that pose a high risk of harm should be closely inspected and supervised by humans to prevent them from making decisions on their own that could have unanticipated or harmful consequences (a minimal risk-tiering sketch follows this list).
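The bullets above describe a risk-based approach to human oversight. The Python sketch below shows one way such tiering logic might be encoded; the use-case fields, tier criteria, and oversight actions are illustrative assumptions, not any regulator's or bank's actual scheme.

```python
# A minimal sketch of risk-tiered human oversight; criteria and actions are assumptions.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    affects_credit_or_funds: bool   # e.g., loan approval, payment release
    customer_facing: bool           # e.g., chatbot advice
    reversible: bool                # can a bad decision be easily undone?

def risk_tier(uc: AIUseCase) -> str:
    if uc.affects_credit_or_funds and not uc.reversible:
        return "high"       # close human supervision, human makes the final call
    if uc.customer_facing:
        return "medium"     # human-on-the-loop review plus disclosure
    return "low"            # autonomous operation with transparency obligations

def required_oversight(tier: str) -> str:
    return {
        "high": "human approval required before every decision takes effect",
        "medium": "periodic human review and disclosure to the customer",
        "low": "logging, disclosure, and post-hoc audits",
    }[tier]

uc = AIUseCase("credit underwriting model", True, True, False)
print(risk_tier(uc), "->", required_oversight(risk_tier(uc)))
```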
5.3. Enhanced Customer Experience
6. Challenges
6.1. Data Privacy and Security
6.2. Bias and Fairness
6.3. Accountability and Transparency
6.4. Skills Gap
7. Regulatory Landscape
7.1. Existing AI Regulations and Guidelines
7.2. Cases of AI Governance Framework
8. Summary of Findings
9. Relevant Theories
9.1. TAM
9.2. UTAUT
9.3. Lewin’s Three-Step Change Theory
9.4. Critical Success Factors
- Industry factors include product attributes, the technology employed, demand characteristics, and similar variables. They can affect all competitors in a given industry, but their impact differs according to the traits and vulnerabilities of particular industry sectors.
- Competitive-strategy factors stem from the competitive positioning and industry history of the business under consideration.
- Environmental factors are macro-level variables such as government legislation, economic conditions, and demographics. They affect all competitors in an industry, yet the competitors have little or no control over them.
- Temporal factors are aspects of a company that interfere with the execution of a planned strategy for a limited amount of time, such as a lack of managerial experience or skilled personnel.
- Managerial-position factors: every functional managerial role within a company carries a general set of associated success factors (the five sources are summarized as a data structure in the sketch below).
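As a compact illustration, the five CSF sources listed above can be organized as a simple data structure; the wording of the example entries is an assumption made for illustration, not a quotation from the CSF literature.

```python
# The five CSF sources restated as a dictionary; entries are illustrative paraphrases.
CSF_SOURCES = {
    "industry": "product attributes, technology employed, demand characteristics",
    "competitive strategy and position": "the firm's competitive positioning and industry history",
    "environmental": "government legislation, economic conditions, demographics",
    "temporal": "short-lived internal barriers, e.g., lack of managerial experience or skilled staff",
    "managerial position": "success factors tied to each functional managerial role",
}

for source, examples in CSF_SOURCES.items():
    print(f"{source}: {examples}")
```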
10. Discussion of Limitations and Recommendations
- Developing AI talent and upskilling the workforce. Work closely with both the private and public sectors to ensure that a nation's workforce has the digital skills needed to interact with AI systems and can adapt to new work practices.
- Encouraging investment in AI start-ups and supporting the ecosystem for AI innovation. Collaborate closely with both the private and public sectors to establish an environment conducive to AI development, allowing businesses to access and use digital technologies, infrastructure, and data.
- Investing in AI research and advancement, so that the safety and resilience of AI systems and tools keep pace with new use cases. Stay up to date on the most recent advances in AI and support research on AI ethics, governance, and cybersecurity.
- Encouraging companies to follow the ASEAN Guide on AI Governance and Ethics by adopting practical tools, and deploying technologies that operationalize AI governance and enable more effective documentation and validation procedures.
- Increasing public awareness of the implications of AI. Educate the public on the potential benefits and risks of AI so that people can use it wisely and take the precautions needed to protect themselves from harmful AI applications.
- Establishing an ASEAN Working Group on AI Governance to direct and supervise regional efforts related to AI governance.
- Representatives from each of the ASEAN member states may form the Working Group, which can collaborate to implement the recommendations outlined in this Guide. It will also offer guidance to ASEAN nations that want to adopt specific sections of the Guide, and when necessary, it will consult with other industry partners to obtain their opinions.
- The catalogue of AI risks should cover anthropomorphism and mistakes; untrue and misleading replies; impersonation, deepfakes, fraud, and malicious actions; violations of intellectual property rights; and threats to confidentiality and privacy as well as the propagation of bias.
- AI governance should include adapting current frameworks and tools; guidance on creating a framework for shared accountability; guidance on strengthening the ability to control generative AI risks; and guidance on distinguishing AI-generated from human-produced information (a minimal provenance-labeling sketch follows this list).
- Assembling a collection of use cases showing how ASEAN-based organizations have applied the Guide in real-world situations, demonstrating their commitment to AI governance and helping them position themselves as ethical AI practitioners.
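One governance recommendation above is guidance on distinguishing AI-generated from human-produced information. The sketch below shows a minimal, hypothetical provenance-labeling approach; the field names and labeling scheme are assumptions, not part of the ASEAN Guide.

```python
# A hypothetical provenance label attached to generated content; fields are assumptions.
import hashlib
import json
from datetime import datetime, timezone

def label_content(text: str, producer: str, model: str | None = None) -> dict:
    """Attach a provenance record to a piece of content ("ai" or "human" producer)."""
    return {
        "content": text,
        "producer": producer,
        "model": model,                       # e.g., an internal model identifier
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # Hash lets downstream systems detect later edits to the content
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }

record = label_content("Your loan application is under review.", producer="ai",
                       model="support-chatbot-v2")
print(json.dumps(record, indent=2))
```

A disclosure scheme like this only distinguishes content at the point of creation; stripping or forging labels is still possible, which is why the Guide pairs it with broader accountability measures.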
11. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Kate, K. Banking Chatbots Examples and Best Practices for Implementation. 2024. Available online: https://tovie.ai/blog/banking-chatbots-examples-and-best-practices-for-implementation (accessed on 19 July 2024).
- McKendrick, J. AI Adoption Skyrocketed over the Last 18 Months. Harvard Business Review. 2021. Available online: https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months (accessed on 20 May 2024).
- Sultani, W.; Chen, C.; Shah, M. Real-world anomaly detection in surveillance videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6479–6488. [Google Scholar]
- Gordeev, D.; Singer, P.; Michailidis, M.; Müller, M.; Ambati, S. Backtesting the predictability of COVID-19. arXiv 2020, arXiv:2007.11411. [Google Scholar]
- Portugal, I.; Alencar, P.; Cowan, D. The use of machine learning algorithms in recommender systems: A systematic review. Expert Syst. Appl. 2018, 97, 205–227. [Google Scholar] [CrossRef]
- Suzuki, K. Overview of deep learning in medical imaging. Radiol. Phys. Technol. 2017, 10, 257–273. [Google Scholar] [CrossRef] [PubMed]
- Esteva, A.; Chou, K.; Yeung, S.; Naik, N.; Madani, A.; Mottaghi, A.; Liu, Y.; Topol, E.; Dean, J.; Socher, R. Deep learning-enabled medical computer vision. npj Digit. Med. 2021, 4, 5. [Google Scholar] [CrossRef]
- Conde, V.; Choi, J. Few-shot long-tailed bird audio recognition. arXiv 2022, arXiv:2206.11260. [Google Scholar]
- Conde, V.; Turgutlu, K. CLIP-Art: Contrastive pre-training for fine-grained art classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 3956–3960. [Google Scholar]
- Henkel, C.; Pfeiffer, P.; Singer, P. Recognizing bird species in diverse soundscapes under weak supervision. arXiv 2021, arXiv:2107.07728. [Google Scholar]
- Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef]
- Marr, B. Is Artificial Intelligence dangerous? 6 AI risks everyone should know about. Forbes. 19 November 2018. Available online: https://www.forbes.com/sites/bernardmarr/2018/11/19/is-artificial-intelligence-dangerous-6-ai-risks-everyone-should-know-about/ (accessed on 20 May 2024).
- Gill, N.; Mathur, A.; Conde, V. A brief overview of AI governance for Responsible Machine Learning Systems. arXiv 2022, arXiv:2211.13130. [Google Scholar]
- KPMG. AI Adoption Accelerated during the Pandemic but Many Say It’s Moving too Fast. 2021. Available online: https://info.kpmg.us/news-perspectives/technology-innovation/thriving-in-an-ai-world/ai-adoption-accelerated-during-pandemic.html (accessed on 20 May 2024).
- Zadeh, A. Is probability theory sufficient for dealing with uncertainty in AI: A negative view. Mach. Intell. Pattern Recognit. 1986, 4, 103–116. [Google Scholar]
- Bresina, J.; Dearden, R.; Meuleau, N.; Ramkrishnan, S.; Smith, D.; Washington, R. Planning under continuous time and resource uncertainty: A challenge for AI. arXiv 2012, arXiv:1301.0559. [Google Scholar]
- ASEAN. ASEAN Guide on AI Governance and Ethics. 2024. Available online: https://asean.org/book/asean-guide-on-ai-governance-and-ethics/ (accessed on 20 May 2024).
- Golić, Z. Finance and artificial intelligence: The fifth industrial revolution and its impact on the financial sector. Zb. Rad. Ekon. Fak. Istočnom Sarajev. 2019, 19, 67–81. [Google Scholar] [CrossRef]
- Georgiev, J. Setting the Scene: Digital Technologies in the Financial Sector. 2018. Available online: https://www.jkg-advisory.com/docs/16072018_Finance_5.0 (accessed on 20 May 2024).
- Sharma, S. 10 Artificial Intelligence Applications Revolutionizing Financial Services. 2019. Available online: https://www.datadriveninvestor.com/2019/07/08/10-artificial-intelligence-applications-revolutionizing-financial-services/ (accessed on 20 May 2024).
- Noonan, L. AI in banking: The reality behind the hype. Financial Times. 12 April 2018. Available online: https://www.ft.com/content/b497a134-2d21-11e8-a34a-7e7563b0b0f4 (accessed on 20 May 2024).
- Schroer, A. 36 Examples of AI in Finance. AI Has Revolutionized the Finance Industry. These Examples Show How. 2024. Available online: https://builtin.com/artificial-intelligence/ai-finance-banking-applications-companies (accessed on 20 May 2024).
- Morandín, F. What is Artificial Intelligence? Int. J. Res. Publ. Rev. 2022, 3, 1947–1951. [Google Scholar] [CrossRef]
- Al-Ameri, T.; Hameed, K. Artificial intelligence: Current challenges and future perspectives. Al-Kindy Coll. Med. J. 2023, 19, 3–4. [Google Scholar] [CrossRef]
- Kenchakkanavar, Y. Exploring the Artificial Intelligence Tools: Realizing the Advantages in Education and Research. J. Adv. Libr. Inf. Sci. 2023, 12, 218–224. [Google Scholar]
- Jain, R. Role of artificial intelligence in banking and finance. J. Manag. Sci. 2023, 13, 1–4. [Google Scholar]
- Luger, F. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 5th ed.; Pearson Education: Noida, India, 1998. [Google Scholar]
- Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 2nd ed.; Pearson Education, Inc.: Upper Saddle River, NJ, USA, 2003. [Google Scholar]
- Zheng, X.; Gildea, E.; Chai, S.; Zhang, T.; Wang, S. Data Science in Finance: Challenges and Opportunities. AI 2023, 5, 55–71. [Google Scholar] [CrossRef]
- Chu, B. Mobile technology and financial inclusion. In Handbook of Blockchain, Digital Finance, and Inclusion; Academic Press: Cambridge, MA, USA, 2018; Volume 1, pp. 131–144. [Google Scholar]
- Killeen, A.; Chan, R. Global financial institutions 2.0. In Handbook of Blockchain, Digital Finance, and Inclusion; Academic Press: Cambridge, MA, USA, 2018; Volume 2, pp. 213–242. [Google Scholar]
- Li, Y.; Yi, J.; Chen, H.; Peng, D. Theory and application of artificial intelligence in financial industry. Data Sci. Financ. Econ. 2021, 1, 96–116. [Google Scholar] [CrossRef]
- Fu, K.; Cheng, D.; Tu, Y.; Zhang, L. Credit card fraud detection using convolutional neural networks. In Neural Information Processing, 23rd International Conference, ICONIP 2016, Kyoto, Japan, 16–21 October 2016; Proceedings, Part III 23; Springer International Publishing: Cham, Switzerland, 2016; pp. 483–490. [Google Scholar]
- Bahnsen, C.; Aouada, D.; Stojanovic, A.; Ottersten, B. Feature engineering strategies for credit card fraud detection. Expert Syst. Appl. 2016, 51, 134–142. [Google Scholar] [CrossRef]
- Sahin, Y.; Bulkan, S.; Duman, E. A cost-sensitive decision tree approach for fraud detection. Expert Syst. Appl. 2013, 40, 5916–5923. [Google Scholar] [CrossRef]
- Bahnsen, C.; Stojanovic, A.; Aouada, D.; Ottersten, B. Cost sensitive credit card fraud detection using Bayes minimum risk. In Proceedings of the 2013 12th International Conference on Machine Learning and Applications, Miami, FL, USA, 4–7 December 2013; Volume 1, pp. 333–338. [Google Scholar]
- Bhingarde, A.; Bangar, A.; Gupta, P.; Karambe, S. Credit card fraud detection using hidden markov model. Int. J. Comput. Sci. Inform. Technol. 2015, 76, 169–170. [Google Scholar]
- Küçükkocaoğlu, G.; Benli, Y.; Küçüksözen, C. Detecting the manipulation of financial information by using artificial neural network models. ISE Rev. 1997, 9, 10–17. [Google Scholar]
- Lin, C.; Chiu, A.; Huang, Y.; Yen, C. Detecting the financial statement fraud: The analysis of the differences between data mining techniques and experts’ judgments. Knowl. Based Syst. 2015, 89, 459–470. [Google Scholar] [CrossRef]
- Albashrawi, M. Detecting financial fraud using data mining techniques: A decade review from 2004 to 2015. J. Data Sci. 2021, 14, 553–570. [Google Scholar]
- Pacelli, V. An artificial neural network approach for credit risk management. J. Intell. Learn. Syst. Appl. 2011, 3, 103–112. [Google Scholar] [CrossRef]
- Khandani, E.; Kim, J.; Lo, W. Consumer credit-risk models via machine-learning algorithms. J. Bank. Financ. 2010, 34, 2767–2787. [Google Scholar] [CrossRef]
- Yu, L.; Yue, W.; Wang, S.; Lai, K. Support vector machine based multiagent ensemble learning for credit risk evaluation. Expert Syst. Appl. 2010, 37, 1351–1360. [Google Scholar] [CrossRef]
- Khashman, A. Credit risk evaluation using neural networks: Emotional versus conventional models. Appl. Soft. Comput. 2011, 11, 5477–5484. [Google Scholar] [CrossRef]
- Lessmann, S.; Baesens, B.; Seow, V.; Thomas, C. Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research. Eur. J. Oper. Res. 2015, 247, 124–136. [Google Scholar] [CrossRef]
- Abellán, J.; Castellano, G. A comparative study on base classifiers in ensemble methods for credit scoring. Expert Syst. Appl. 2017, 73, 1–10. [Google Scholar] [CrossRef]
- Mashrur, A.; Luo, W.; Zaidi, A.; Kelly, A. Machine learning for financial risk management: A survey. IEEE Access 2020, 8, 203203–203223. [Google Scholar] [CrossRef]
- Kumar, V.; Rajan, B.; Venkatesan, R.; Lecinski, J. Understanding the role of artificial intelligence in personalized engagement marketing. Calif. Manag. Rev. 2019, 61, 135–155. [Google Scholar] [CrossRef]
- Timoshenko, A.; Hauser, R. Identifying customer needs from user-generated content. Mark. Sci. 2019, 38, 1–192. [Google Scholar] [CrossRef]
- Jiang, Y.; Wu, F. Development status and regulatory suggestions of intelligent investment consultant. Secur. Mark. Herald. 2016, 293, 4–10. [Google Scholar]
- Yu, J.; Peng, Y. The Application and Challenges of Artificial Intelligence in the Field of Financial Risk Management. South. Financ. 2017, 9, 70–74. [Google Scholar]
- Wang, D. Traditional financial institutions are ready to move. Is it more advantageous to set foot in intelligent investment advisory. China's Strateg. Emerg. Ind. 2017, 70–72. [Google Scholar]
- Ponemon, L.; Julian, T.; Lalan, C. IBM & Ponemon Institute Study: Data Breach Costs Rising, Now $4 million per Incident. PR Newswire. 15 June 2016. Available online: https://www.prnewswire.com/news-releases/ibm--ponemon-institute-study-data-breach-costs-rising-now-4-million-per-incident-300284792.html (accessed on 20 May 2024).
- Alvarez, Y.; Leguizamón, A.; Londoño, J. Risks and security solutions existing in the Internet of things (IoT) in relation to Big Data. Ing. Compet. 2020, 23, 9–10. [Google Scholar] [CrossRef]
- Cheng, L. Application status and security risk analysis of artificial intelligence in financial field. Financ. Technol. Era 2016, 2016, 47–49. [Google Scholar]
- Ma, L.; Wei, Y. Application of artificial intelligence technology in financial field: Main difficulties and countermeasures. South. Financ. 2018, 78–84. [Google Scholar]
- Linklaters. AI in Financial Services 3.0 Managing Machines in an Evolving Legal Landscape; Linklaters: London, UK, 2023. [Google Scholar]
- Brummer, C. How international financial law works (and how it doesn’t). Geo. LJ 2010, 99, 257. [Google Scholar]
- Allen, A. Business Challenges with Machine Learning, Machine Learning in Practice. 2018. Available online: https://medium.com/machine-learning-inpractice/business-challenges-with-machine-learning-3d12a32dfd61 (accessed on 19 May 2024).
- Opala, M. 7 Challenges for Machine Learning Projects. 2018. Available online: https://www.netguru.com/blog/7-challenges-for-machine-learningprojects (accessed on 20 May 2024).
- Bathaee, Y. The artificial intelligence black box and the failure of intent and causation. Harv. J. Law Technol. 2017, 31, 889. [Google Scholar]
- OECD. AI Policy Observatory. Catalogue of Tools & Metrics for Trustworthy AI. 2022. Available online: https://oecd.ai/en/catalogue/tools (accessed on 20 May 2024).
- Salesforce. Einstein OCR Model Card. 2024. Available online: https://developer.salesforce.com/docs/analytics/einstein-vision-language/guide/einstein-ocr-model-card.html (accessed on 19 July 2024).
- Deloitte. AI Regulation in the Financial Sector. How to Ensure Financial Institutions’ Accountability; Deloitte Japan: Tokyo, Japan, 2023. [Google Scholar]
- Basel Committee on Banking Supervision. Basel Committee Publishes Work Programme and Strategic Priorities for 2021–2022. 2021. Available online: https://www.bis.org/press/p210416.htm (accessed on 20 May 2024).
- Basel Committee on Banking Supervision. Newsletter on Artificial Intelligence and Machine Learning. 2022. Available online: https://www.bis.org/publ/bcbs_nl27.htm (accessed on 19 May 2024).
- Bank of England. DP5/22—Artificial Intelligence and Machine Learning. 2022. Available online: https://www.bankofengland.co.uk/prudential-regulation/publication/2022/october/artificial-intelligence (accessed on 19 May 2024).
- EIOPA. Artificial Intelligence Governance Principles: Towards Ethical and Trustworthy Artificial Intelligence in the European Insurance Sector: A Report from EIOPA’s Consultative Expert Group on Digital Ethics in Insurance; EIOPA: Frankfurt am Main, Germany, 2021. [Google Scholar]
- NAIC. Exposure Draft of the Model Bulletin on the Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers 7/17/2023. 2023. Available online: https://content.naic.org/sites/default/files/07172023-exposure-draft-ai-model-bulletin.docx (accessed on 20 May 2024).
- IOSCO. The Use of Artificial Intelligence and Machine Learning by Market Intermediaries and Asset Managers: Final Report. 2021. Available online: https://www.iosco.org/library/pubdocs/pdf/IOSCOPD684.pdf (accessed on 20 May 2024).
- Faraj, S.; Pachidi, S. Beyond Uberization: The co-constitution of technology and organizing. Organ. Theory 2021, 2, 1–14. [Google Scholar] [CrossRef]
- Shaikh, A.; Karjaluoto, H. Mobile banking adoption: A literature review. Telemat. Inform. 2015, 32, 129–142. [Google Scholar] [CrossRef]
- Teo, T.; Faruk, Ö.; Bahçekapili, E. Efficiency of the technology acceptance model to explain pre-service teachers’ intention to use technology: A Turkish study. Campus-Wide Inf. Syst. 2011, 28, 93–101. [Google Scholar] [CrossRef]
- Davis, F.; Sinha, A. Varieties of Uberization: How technology and institutions change the organization(s) of late capitalism. Organ. Theory 2021, 2, 2–17. [Google Scholar] [CrossRef]
- Rogers, M.; Singhal, A.; Quinlan, M. Diffusion of innovations. In An Integrated Approach to Communication Theory and Research; Routledge: London, UK, 2014; pp. 432–448. [Google Scholar]
- Ayub, M.; Zaini, H.; Luan, S.; Jaafar, W. The influence of mobile self-efficacy, personal innovativeness and readiness towards students’ attitudes towards the use of mobile apps in learning and teaching. Int. J. Acad. Res. Bus. Soc. Sci. 2017, 7, 364–374. [Google Scholar]
- Yoon, C.; Lim, D. An empirical study on factors affecting customers’ acceptance of internet-only banks in Korea. Cogent Bus. Manag. 2020, 7, 1792259. [Google Scholar] [CrossRef]
- Arner, W.; Barberis, J.; Buckley, P. 150 years of Fintech: An evolutionary analysis. JASSA 2016, 3, 22–29. [Google Scholar]
- EY. Global FinTech Adoption Index 2019; EY: Singapore, 2019. [Google Scholar]
- Ryu, S. What makes users willing or hesitant to use Fintech? The moderating effect of user type. Ind. Manag. Data Syst. 2018, 118, 541–569. [Google Scholar] [CrossRef]
- Dermody, J.; Yun, J.; Della, V. Innovations to advance sustainability behaviours. Serv. Ind. J. 2019, 39, 1029–1033. [Google Scholar] [CrossRef]
- Yoshino, N.; Morgan, J.; Long, Q. Financial Literacy and Fintech Adoption in Japan; Asian Development Bank Institute: Tokyo, Japan, 2020. [Google Scholar]
- Morgan, J.; Trinh, Q. FinTech and Financial Literacy in Vietnam; ADBI Working Paper Series; Asian Development Bank Institute: Tokyo, Japan, 2020; pp. 1–23. [Google Scholar]
- Davis, D. A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1985. [Google Scholar]
- Akturan, U.; Tezcan, N. Mobile banking adoption of the youth market: Perceptions and intentions. Mark. Intell. Plan. 2012, 30, 444–459. [Google Scholar] [CrossRef]
- Gidhagen, M.; Gebert, S. Determinants of digitally instigated insurance relationships. Int. J. Bank Mark. 2011, 29, 517–534. [Google Scholar] [CrossRef]
- Hu, Z.; Ding, S.; Li, S.; Chen, L.; Yang, S. Adoption intention of fintech services for bank users: An empirical examination with an extended technology acceptance model. Symmetry 2019, 11, 340. [Google Scholar] [CrossRef]
- Teigland, R.; Siri, S.; Larsson, A.; Puertas, M.; Bogusz, I. Introduction: FinTech and shifting financial system institutions. In The Rise and Development of FinTech; Routledge: London, UK, 2018; pp. 1–18. [Google Scholar]
- Braido, M.; Klein, Z. Digital Entrepreneurship and Institutional Changes: Fintechs in the Brazilian Mobile Payment System. Available online: https://aisel.aisnet.org/confirm2020/20 (accessed on 20 May 2024).
- Almunawar, N.; Anshari, M. Customer acceptance of online delivery platform during the COVID-19 pandemic: The case of Brunei Darussalam. J. Sci. Technol. Policy Manag. 2024, 15, 288–310. [Google Scholar] [CrossRef]
- Bagozzi, R.P. The legacy of the technology acceptance model and a proposal for a paradigm shift. J. Assoc. Inf. Syst. 2007, 8, 3. [Google Scholar] [CrossRef]
- Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
- Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157. [Google Scholar] [CrossRef]
- Kritsonis, A. Comparison of change theories. Int. J. Sch. Acad. Intellect. Divers. 2005, 8, 1–7. [Google Scholar]
- Robbins, P. Organisational Behaviour, 10th ed.; Prentice Hall: London, UK, 2003. [Google Scholar]
- Daniel, D.R. Management information crisis. Harv. Bus. Rev. 1961, 39, 111–121. [Google Scholar]
- Rockart, J.F. Chief executives define their own data needs. Harv. Bus. Rev. 1979, 57, 81–93. [Google Scholar]
- Bullen, C.V.; Rockart, J.F. A Primer on Critical Success Factors; Center for Information Systems Research, MIT: Cambridge, MA, USA, 1981. [Google Scholar]
- Grunert, K.G.; Ellegaard, C. The Concept of Key Success Factors: Theory and Method; Mapp Working Paper No 4; Aarhus University: Aarhus, Denmark, 1992; pp. 1–24. [Google Scholar]
- Mintzberg, H. The design school: Reconsidering the basic premises of strategic management. Strateg. Manag. J. 1990, 11, 171–195. [Google Scholar] [CrossRef]
- Mintzberg, H. Strategy formation: Schools of thought. In Perspectives on Strategic Management; Fredrickson, J.W., Ed.; Harper: Grand Rapids, MI, USA, 1990; pp. 105–236. [Google Scholar]
- Anshari, M.; Almunawar, M.N.; Masri, M.; Hamdan, M. Digital marketplace and FinTech to support agriculture sustainability. Energy Procedia 2019, 156, 234–238. [Google Scholar] [CrossRef]
- Hamdan, M.; Anshari, M. Paving the Way for the Development of FinTech Initiatives in ASEAN. In Financial Technology and Disruptive Innovation in ASEAN; IGI Global: Hershey, PA, USA, 2020; pp. 80–107. [Google Scholar]
- Anshari, M.; Almunawar, M.N.; Masri, M. Digital twin: Financial technology’s next frontier of robo-advisor. J. Risk Financ. Manag. 2022, 15, 163. [Google Scholar] [CrossRef]
- Firmansyah, E.A.; Masri, M.; Anshari, M.; Besar, M.H. Factors affecting fintech adoption: A systematic literature review. FinTech. 2022, 2, 21–33. [Google Scholar] [CrossRef]
Cases of AI Governance Framework

Establish Internal Governance Measures and Structures

Aboitiz Group
- Acknowledges that AI and ML algorithms are essential group assets; a strategic AI governance framework is critical to ensure that programs and algorithms are appropriately managed and support the group's strategic and day-to-day business operations.
- Ethical considerations related to AI use were aligned with company values.
- Roles and duties for the ethical application of AI technology should be clearly stated.
- The management committee must review and approve all AI-related procedures and decisions.
- A model governance management committee was established.

Smart Nation Group (SNG), Singapore
- Approval gates for large language model (LLM) products at various development phases: Singapore's National AI Office (NAIO) has set standards for government product teams creating custom LLM products and has formed an AI workgroup of government stakeholders to supervise product safety.
- From beta testing onward, product teams need only request approval from the central AI workgroup, which promotes experimentation while guaranteeing sufficient review of LLM products.
Determining the Level of Human Involvement in AI-Augmented Decision Making

EY
- EY is dedicated to creating and implementing reliable AI solutions for clients as well as for internal use.
- EY evaluates and categorizes models as high, medium, or low risk using an AI model risk-tiering approach.
- The risk tier is based on an evaluation of the main risk areas related to AI, including use-case design, ethical, data, privacy, algorithmic, performance, compliance, technology, and business risks.
- Appropriate monitoring and human oversight are implemented for AI models according to their risk tier.

Smart Nation Group (SNG), Singapore
- NAIO adopts a risk-based approach when advising product teams on necessary mitigating actions; the degree of risk varies across AI products.
- For instance, AI products intended for public use should be hardened against hostile attacks. Corresponding mitigating measures include product teams' work to strengthen robustness, such as robustness tests that improve performance and prevent users from mounting brute-force attacks.
Operations Management through Documenting Data Lineage, Ensuring Data Quality, and Mitigating Bias

UCARE.AI
- UCARE.AI is a deep-tech start-up established in Singapore that offers real-time predictive insights for the healthcare industry and other fields. Its multi-award-winning online ML and AI platform is built on a cloud-based microservices architecture.
- Collected data in a secure, centralized log storage system and recorded data consistently (a minimal lineage-logging sketch follows this case).
- Took care to ensure that the data were of high quality and correctly presented before building AI models.
- Prioritized developing AI models unique to each client rather than relying on third-party data for model construction.
- As a result, patient bill estimates became more accurate, because patient profiles and the features chosen for each AI model were differentiated for every hospital.
- Reduced the possibility of bias: patients received personalized, data-driven estimates of their medical bills from objective, reliable machine projections rather than projections shaped by human biases built into the algorithms.
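The UCARE.AI case centers on documenting data lineage and checking data quality before building models. The following is a minimal, hypothetical sketch of one lineage/quality log entry; the record fields, file name, and checks are illustrative assumptions, not the company's actual implementation.

```python
# Hypothetical data-lineage/quality logging; fields, file names, and checks are assumptions.
import hashlib
import json
from datetime import datetime, timezone

def lineage_record(dataset_path: str, source: str, rows: int,
                   checks: dict[str, bool]) -> dict:
    """Create one entry for a centralized, append-only lineage/quality log."""
    with open(dataset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "dataset": dataset_path,
        "source": source,                        # e.g., which hospital supplied the data
        "rows": rows,
        "sha256": digest,                        # detects silent changes to training data
        "quality_checks": checks,                # e.g., {"no_missing_ids": True}
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

# Create a tiny example file so the sketch runs end to end
with open("hospital_a_bills.csv", "w") as f:
    f.write("patient_id,amount\n1,1200\n2,860\n")

entry = lineage_record("hospital_a_bills.csv", source="hospital_a", rows=2,
                       checks={"no_missing_ids": True, "amounts_positive": True})
print(json.dumps(entry, indent=2))
```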
Stakeholder Interaction and Communication

Gojek
- Gojek is a digital payment group and on-demand multi-service platform based in Jakarta, Indonesia. AI is used, within financial constraints, to grow the user base and keep customers engaged by allocating promotions automatically.
- Automated promotion allocation identifies users whose engagement increases most in response to incentives and prioritizes promotion distribution to them while projecting campaign costs (a minimal uplift-allocation sketch follows this case).
- Campaign managers deploy promotional campaigns that customers interact with; customers provide implicit feedback on the relevance of campaigns, which is reflected in the models' online metrics.
- This mechanism enables the Data Science team and campaign managers to make decisions about model version management.
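The Gojek case describes allocating promotions to users with high incremental engagement within a campaign budget. The sketch below illustrates that idea with a two-model uplift approach; the features, simulated data, and budget logic are assumptions for illustration, not Gojek's actual system.

```python
# Uplift-style promotion allocation under a budget; data and parameters are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))                       # user features (hypothetical)
treated = rng.integers(0, 2, n)                   # past A/B assignment: 1 = got promo
# Simulated engagement: some users respond only when given a promotion
engaged = (rng.random(n) < 0.2 + 0.3 * treated * (X[:, 0] > 0)).astype(int)

# Two-model uplift estimate: P(engage | promo) - P(engage | no promo)
m_t = LogisticRegression().fit(X[treated == 1], engaged[treated == 1])
m_c = LogisticRegression().fit(X[treated == 0], engaged[treated == 0])

def allocate_promotions(X_new, budget: float, cost_per_promo: float = 1.0):
    uplift = m_t.predict_proba(X_new)[:, 1] - m_c.predict_proba(X_new)[:, 1]
    order = np.argsort(-uplift)                   # highest incremental engagement first
    n_afford = int(budget // cost_per_promo)
    chosen = order[:n_afford]
    projected_cost = len(chosen) * cost_per_promo
    return chosen, projected_cost

chosen, cost = allocate_promotions(rng.normal(size=(500, 4)), budget=100)
print(f"promotions allocated to {len(chosen)} users, projected cost {cost}")
```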
Table. AI applications, benefits, and challenges and ethical issues (summary).
Table. Existing regulations and strategies for an AI governance framework in an organization. The strategies column covers: establishing internal governance measures and structures; determining the level of human involvement in AI-augmented decision making; operations management through documenting data lineage, ensuring data quality, and mitigating bias; and stakeholder interaction and communication.