
Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements

Published: 01 January 2023

Abstract

Given the tremendous potential and influence of artificial intelligence (AI) and algorithmic decision-making (DM), these systems have found wide-ranging applications across diverse fields, including the education, business, healthcare, government, and justice sectors. While AI and DM offer significant benefits, they also carry the risk of unfavourable outcomes for users and society. As a result, ensuring the safety, reliability, and trustworthiness of these systems becomes crucial. This article provides a comprehensive review of the synergy between AI and DM, focussing on the importance of trustworthiness. The review addresses the following four key questions, guiding readers towards a deeper understanding of this topic: (i) why do we need trustworthy AI? (ii) what are the requirements for trustworthy AI? In line with this second question, the key requirements that establish the trustworthiness of these systems are explained, including explainability, accountability, robustness, fairness, acceptance of AI, privacy, accuracy, reproducibility, and human agency and oversight. (iii) how can we have trustworthy data? and (iv) what are the priorities in terms of trustworthiness requirements for challenging applications? Regarding this last question, six applications are discussed: trustworthy AI in education, environmental science, 5G-based IoT networks, robotics for architecture, engineering and construction, financial technology, and healthcare. The review emphasises the need to address trustworthiness in AI systems before their deployment in order to achieve the goal of AI for good. An example is provided that demonstrates how trustworthy AI can be employed to eliminate bias in human resources management systems. The insights and recommendations presented in this paper will serve as a valuable guide for AI researchers seeking to achieve trustworthiness in their applications.
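To make the fairness requirement concrete, the following minimal Python sketch (illustrative only, not taken from the reviewed works) shows how a hypothetical HR screening model might be audited for group bias using the demographic parity difference; the predictions, the binary protected-attribute encoding, and the 0.1 tolerance are all assumptions made for this example.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-outcome (e.g., shortlisting) rates
    between two candidate groups; 0 indicates parity."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical model outputs (1 = shortlist) and a binary protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

dpd = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {dpd:.2f}")
if dpd > 0.1:  # illustrative tolerance, not a normative threshold
    print("Potential bias detected: audit features and retrain before deployment.")
```

In this toy audit the shortlisting rates are 0.6 and 0.4, giving a difference of 0.2 and triggering the warning; a pre-deployment check of this kind reflects the review's argument that trustworthiness must be assessed before AI systems are deployed.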

References

[1]
J. Zhang, M. Z. A. Bhuiyan, X. Yang, A. K. Singh, D. F. Hsu, and E. Luo, “Trustworthy target tracking with collaborative deep reinforcement learning in edgeai-aided iot,” IEEE Transactions on Industrial Informatics, vol. 18, no. 2, pp. 1301–1309, 2022.
[2]
Y. Tai, B. Gao, Q. Li, Z. Yu, C. Zhu, and V. Chang, “Trustworthy and intelligent covid-19 diagnostic iomt through xr and deep-learning-based clinic data access,” IEEE Internet of Things Journal, vol. 8, no. 21, pp. 15965–15976, 2021.
[3]
X. He, Y. Chen, and L. Huang, “Toward a trustworthy classifier with deep CNN: uncertainty estimation meets hyperspectral image,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2022.
[4]
A. Eusebi, M. Vasek, E. Cockbain, and E. Mariconti, “The ethics of going deep: challenges in machine learning for sensitive security domains,” in Proceedings of the 2022 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), pp. 533–537, IEEE, Genoa, Italy, June, 2022.
[5]
J. Dastin, “Amazon scraps secret ai recruiting tool that showed bias against women,” in Ethics of Data and Analytics, pp. 296–299, Auerbach Publications, Boca Raton, FL, USA, 2018.
[6]
X. Fernández-Fuentes, T. Pena, and J. C. Cabaleiro, “Digital forensic analysis methodology for private browsing: firefox and chrome on linux as a case study,” Computers and Security, vol. 115, 2022.
[7]
E. Crigger, K. Reinbold, C. Hanson, A. Kao, K. Blake, and M. Irons, “Trustworthy augmented intelligence in health care,” Journal of Medical Systems, vol. 46, no. 2, pp. 12–11, 2022.
[8]
K. A. Crockett, L. Gerber, A. Latham, and E. Colyer, “Building trustworthy ai solutions: a case for practical solutions for small businesses,” IEEE Transactions on Artificial Intelligence, vol. 4, p. 1, 2021.
[9]
S. Thiebes, S. Lins, and A. Sunyaev, “Trustworthy artificial intelligence,” Electronic Markets, vol. 31, no. 2, pp. 447–464, 2021.
[10]
N. Hasani, M. A. Morris, A. Rahmim, R. M. Summers, E. Jones, E. Siegel, and B. Saboury, “Trustworthy artificial intelligence in medical imaging,” PET Clinics, vol. 17, pp. 1–12, 2022.
[11]
C. Huang, Z. Zhang, B. Mao, and X. Yao, “An overview of artificial intelligence ethics,” IEEE Transactions on Artificial Intelligence, vol. 4, no. 4, pp. 799–819, 2023.
[12]
E. Lemonne, Ethics Guidelines For Trustworthy Ai, Future European Commission, Brussels, Belgium, 2001.
[13]
E. Hickman and M. Petrin, “Trustworthy ai and corporate governance: the eu’s ethics guidelines for trustworthy artificial intelligence from a company law perspective,” European Business Organization Law Review, vol. 22, no. 4, pp. 593–625, 2021.
[14]
AIME Planning Team, Artificial Intelligence Measurement and Evaluation at the National institute of Standards and Technology, National Institute of Standards and Technology, Washington, DC, USA, 2021, https://www.nist.gov/news-events/events/2021/06/ai-measurement-and-evaluation-workshop.
[15]
D. Almeida, K. Shmarko, and E. Lomas, “The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: a comparative analysis of us, eu, and UK regulatory frameworks,” AI and Ethics, vol. 2, no. 3, pp. 377–387, 2021.
[16]
D. Gunning, M. Stefik, J. Choi, T. Miller, S. Stumpf, and G.-Z. Yang, “Xai—explainable artificial intelligence,” Science Robotics, vol. 4, no. 37, 2019.
[17]
M. R. Islam, M. U. Ahmed, S. Barua, and S. Begum, “A systematic review of explainable artificial intelligence in terms of different application domains and tasks,” Applied Sciences, vol. 12, no. 3, p. 1353, 2022.
[18]
D. Kaur, S. Uslu, K. J. Rittichier, and A. Durresi, “Trustworthy artificial intelligence: a review,” ACM Computing Surveys, vol. 55, no. 2, pp. 1–38, 2022.
[19]
B. Burke, D. Cearley, N. Jones, D. Smith, A. Chandrasekaran, C. Lu, and K. Panetta, Gartner Top 10 Strategic Technology Trends for 2020-smarter with Gartner, 2021.
[20]
A. Fügener, J. Grahl, A. Gupta, and W. Ketter, “Cognitive challenges in human–artificial intelligence collaboration: investigating the path toward productive delegation,” Information Systems Research, vol. 33, no. 2, pp. 678–696, 2022.
[21]
E&T Editorial Staff, “Nursing care robots become more human with improved control method,” 2020, https://eandt.theiet.org/content/articles/2020/01/nursing-care-robots-become-more-human-with-improved-control-method/.
[22]
A. Holzinger, M. Dehmer, F. Emmert-Streib, R. Cucchiara, I. Augenstein, J. D. Ser, W. Samek, I. Jurisica, and N. Díaz-Rodríguez, “Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence,” Information Fusion, vol. 79, pp. 263–278, 2022.
[23]
S. Nazir, D. M. Dickson, and M. U. Akram, “Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks,” Computers in Biology and Medicine, vol. 156, 2023.
[24]
C. Radclyffe, M. Ribeiro, and R. H. Wortham, “The assessment list for trustworthy artificial intelligence: a review and recommendations,” Frontiers in Artificial Intelligence, vol. 6, 2023.
[25]
A. F. Markus, J. A. Kors, and P. R. Rijnbeek, “The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies,” Journal of Biomedical Informatics, vol. 113, 2021.
[26]
V. Chamola, V. Hassija, A. R. Sulthana, D. Ghosh, D. Dhingra, and B. Sikdar, “A review of trustworthy and explainable artificial intelligence (xai),” IEEE Access, vol. 11, pp. 78994–79015, 2023.
[27]
N. Emaminejad and R. Akhavian, “Trustworthy ai and robotics: implications for the aec industry,” Automation in Construction, vol. 139, 2022.
[28]
K. Rasheed, A. Qayyum, M. Ghaly, A. Al-Fuqaha, A. Razi, and J. Qadir, “Explainable, trustworthy, and ethical machine learning for healthcare: a survey,” Computers in Biology and Medicine, vol. 149, 2022.
[29]
A. Albahri, A. M. Duhaim, M. A. Fadhel, A. Alnoor, N. S. Baqer, L. Alzubaidi, O. Albahri, A. Alamoodi, J. Bai, A. Salhi, J. Santamaría, C. Ouyang, A. Gupta, Y. Gu, and M. Deveci, “A systematic review of trustworthy and explainable artificial intelligence in healthcare: assessment of quality, bias risk, and data fusion,” Information Fusion, vol. 96, pp. 156–191, 2023.
[30]
G. Li, B. Liu, and H. Zhang, “Quality attributes of trustworthy artificial intelligence in normative documents and secondary studies: a preliminary review,” Computer, vol. 56, no. 4, pp. 28–37, 2023.
[31]
S. Vincent-Lancrin and R. van der Vlies, Trustworthy Artificial Intelligence (Ai) in Education: Promises and Challenges, The Organization for Economic Cooperation and Development, Paris, France, 2020.
[32]
T. Feng, R. Hebbar, N. Mehlman, X. Shi, A. Kommineni, and S. Narayanan, “A review of speech-centric trustworthy machine learning: privacy, safety, and fairness,” APSIPA Transactions on Signal and Information Processing, vol. 12, no. 3, 2023.
[33]
Y.-L. Chou, C. Moreira, P. Bruza, C. Ouyang, and J. Jorge, “Counterfactuals and causability in explainable artificial intelligence: theory, algorithms, and applications,” Information Fusion, vol. 81, pp. 59–83, 2022.
[34]
M. Haenlein and A. Kaplan, “A brief history of artificial intelligence: on the past, present, and future of artificial intelligence,” California Management Review, vol. 61, no. 4, pp. 5–14, 2019.
[35]
L. Floridi and J. Cowls, “A unified framework of five principles for ai in society,” Machine Learning and the City: Applications in Architecture and Urban Design, pp. 535–545, John Wiley & Sons, Hoboken, NJ, USA, 2022.
[36]
S. Russell, “Provably beneficial artificial intelligence,” in Proceedings of the 27th International Conference on Intelligent User Interfaces, p. 3, New York, NY, USA, March, 2022.
[37]
P. Mikalef, K. Conboy, J. E. Lundström, and A. Popovič, “Thinking responsibly about responsible ai and ‘the dark side’of ai,” European Journal of Information Systems, vol. 31, 2022.
[38]
M. Ashok, R. Madan, A. Joha, and U. Sivarajah, “Ethical framework for artificial intelligence and digital technologies,” International Journal of Information Management, vol. 62, 2022.
[39]
L. Floridi, “Establishing the rules for building trustworthy ai,” Nature Machine Intelligence, vol. 1, no. 6, pp. 261–262, 2019.
[40]
J. Chapiro, B. Allen, A. Abajian, B. Wood, N. Kothary, D. Daye, H. Bai, A. Sedrakyan, M. Diamond, and V. Simonyan, “Proceedings from the society of interventional radiology foundation research consensus panel on artificial intelligence in interventional radiology: from code to bedside,” Journal of Vascular and Interventional Radiology, vol. 33, 2022.
[41]
I. Ulnicane, “Artificial intelligence in the European Union: policy, ethics and regulation,” in The Routledge Handbook of European Integrations, Taylor & Francis, Oxfordshire, UK, 2022.
[42]
S. K. Lo, Y. Liu, Q. Lu, C. Wang, X. Xu, H.-Y. Paik, and L. Zhu, “Toward trustworthy AI: blockchain-based architecture design for accountability and fairness of federated learning systems,” IEEE Internet of Things Journal, vol. 10, no. 4, pp. 3276–3284, 2023.
[43]
J. Ma, L. Schneider, S. Lapuschkin, R. Achtibat, M. Duchrau, J. Krois, F. Schwendicke, and W. Samek, “Towards trustworthy ai in dentistry,” Journal of Dental Research, vol. 101, 2022.
[44]
L. Alzubaidi, M. A. Fadhel, O. Al-Shamma, J. Zhang, J. Santamaría, and Y. Duan, “Robust application of new deep learning tools: an experimental study in medical imaging,” Multimedia Tools and Applications, vol. 81, no. 10, pp. 13289–13317, 2022.
[45]
C. Tonkin, “Robodebt was an ai ethics disaster,” 2021, https://ia.acs.org.au/article/2021/robodebt-was-an-ai-ethics-disaster.html.
[47]
I. Ahmed, G. Jeon, and F. Piccialli, “From artificial intelligence to explainable artificial intelligence in industry 4.0: a survey on what, how, and where,” IEEE Transactions on Industrial Informatics, vol. 18, no. 8, pp. 5031–5042, 2022.
[48]
H. Khosravi, S. B. Shum, G. Chen, C. Conati, Y.-S. Tsai, J. Kay, S. Knight, R. Martinez-Maldonado, S. Sadiq, and D. Gašević, “Explainable artificial intelligence in education,” Computers in Education: Artificial Intelligence, vol. 3, 2022.
[49]
A. Rawal, J. McCoy, D. Rawat, B. Sadler, and R. Amant, “Recent advances in trustworthy explainable artificial intelligence: status, challenges and perspectives,” IEEE Transactions on Artificial Intelligence, vol. 3, 2021.
[50]
A. Albahri, Z. Al-qaysi, L. Alzubaidi, A. Alnoor, O. Albahri, A. Alamoodi, and A. A. Bakar, “A systematic review of using deep learning technology in the steady-state visually evoked potential-based brain-computer interface applications: current trends and future trust methodology,” International Journal of Telemedicine and Applications, vol. 2023, 24 pages, 2023.
[51]
M. Velmurugan, C. Ouyang, C. Moreira, and R. Sindhgatta, “Evaluating stability of post-hoc explanations for business process predictions,” in Proceedings of the Service-Oriented Computing-19th International Conference, ICSOC 2021, vol. 13121, pp. 49–64, Dubai, United Arab Emirates, November, 2021.
[52]
A. Selbst and J. Powles, ““Meaningful information” and the right to explanation,” in Proceedings of the Conference on Fairness, Accountability and Transparency, p. 48, PMLR, Atlanta GA USA, January, 2018.
[53]
M. Velmurugan, C. Ouyang, C. Moreira, and R. Sindhgatta, “Evaluating fidelity of explainable methods for predictive process analytics,” in Proceedings of the Intelligent Information Systems- CAISE Forum 2021, vol. 424, pp. 64–72, Melbourne, Australia, June, 2021.
[54]
S. Sreedharan, S. Srivastava, and S. Kambhampati, “Using state abstractions to compute personalized contrastive explanations for ai agent behavior,” Artificial Intelligence, vol. 301, 2021.
[55]
D. Shin, “The effects of explainability and causability on perception, trust, and acceptance: implications for explainable ai,” International Journal of Human-Computer Studies, vol. 146, 2021.
[56]
B. Wickramanayake, C. Ouyang, C. Moreira, and Y. Xu, “Generating purpose-driven explanations: the case of process predictive model inspection,” in Proceedings of the Intelligent Information Systems-CAISE Forum 2022, pp. 120–129, Leuven, Belgium, June, 2022.
[57]
W. X. Lim, Z. Chen, and A. Ahmed, “The adoption of deep learning interpretability techniques on diabetic retinopathy analysis: a review,” Medical and Biological Engineering and Computing, vol. 60, pp. 1–10, 2022.
[58]
Y. Huang, D. Chen, W. Zhao, Y. Lv, and S. Wang, “Deep patch learning algorithms with high interpretability for regression problems,” International Journal of Intelligent Systems, vol. 37, no. 11, pp. 8239–8276, 2022.
[59]
C. Yang, A. Rangarajan, and S. Ranka, “Global model interpretation via recursive partitioning,” in Proceedings of the 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), pp. 1563–1570, IEEE, Exeter, UK, June, 2018.
[60]
C. Moreira, Y. Chou, M. Velmurugan, C. Ouyang, R. Sindhgatta, and P. Bruza, “LINDA-BN: an interpretable probabilistic approach for demystifying black-box predictive models,” Decision Support Systems, vol. 150, 2021.
[61]
D. Lyu, F. Yang, H. Kwon, W. Dong, L. Yilmaz, and B. Liu, “Tdm: trustworthy decision-making via interpretability enhancement,” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 6, no. 3, pp. 450–461, 2022.
[62]
C. Reed, “How should we regulate artificial intelligence?” Philosophical Transactions of the Royal Society A: Mathematical, Physical & Engineering Sciences, vol. 376, no. 2128, 2018.
[63]
B. Wickramanayake, Z. He, C. Ouyang, C. Moreira, Y. Xu, and R. Sindhgatta, “Building interpretable models for business process prediction using shared and specialised attention mechanisms,” Knowledge-Based Systems, vol. 248, 2022.
[64]
R. Sindhgatta, C. Ouyang, and C. Moreira, “Exploring interpretability for predictive process analytics,” in Proceedings of the Service-Oriented Computing- 18th International Conference, ICSOC 2020, vol. 12571, pp. 439–447, Dubai, United Arab Emirates, December, 2020.
[65]
U. Bhatt, A. Xiang, S. Sharma, A. Weller, A. Taly, Y. Jia, J. Ghosh, R. Puri, J. M. Moura, and P. Eckersley, “Explainable machine learning in deployment,” in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 648–657, Barcelona, Spain, January, 2020.
[66]
C. Ieracitano, N. Mammone, A. Hussain, and F. C. Morabito, “A novel explainable machine learning approach for eeg-based brain-computer interface systems,” Neural Computing and Applications, vol. 34, no. 14, pp. 11347–11360, 2022.
[67]
G. Ras, N. Xie, M. van Gerven, and D. Doran, “Explainable deep learning: a field guide for the uninitiated,” Journal of Artificial Intelligence Research, vol. 73, pp. 329–397, 2022.
[68]
A. de Waal and J. W. Joubert, “Explainable bayesian networks applied to transport vulnerability,” Expert Systems with Applications, vol. 209, 2022.
[69]
C. Mao, R. Lin, D. Towey, W. Wang, J. Chen, and Q. He, “Trustworthiness prediction of cloud services based on selective neural network ensemble learning,” Expert Systems with Applications, vol. 168, 2021.
[70]
R. Srinivasan and B. San Miguel González, “The role of empathy for artificial intelligence accountability,” Journal of Responsible Technology, vol. 9, 2022.
[71]
A. Choudhury and O. Asan, “Impact of accountability, training, and human factors on the use of artificial intelligence in healthcare: exploring the perceptions of healthcare practitioners in the us,” Human Factors in Healthcare, vol. 2, 2022.
[72]
S. Sharma, Y. S. Rawal, S. Pal, and R. Dani, “Fairness, accountability, sustainability, transparency (fast) of artificial intelligence in terms of hospitality industry,” in ICT Analysis and Applications, pp. 495–504, Springer, Berlin, Germany, 2022.
[73]
M. Wieringa, “What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability,” in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 1–18, Barcelona, Spain, January, 2020.
[74]
B. S. Cruz and M. de Oliveira Dias, “Crashed boeing 737-max: fatalities or malpractice,” GSJ, vol. 8, pp. 2615–2624, 2020.
[75]
S. Poier, “Clean and green–the volkswagen emissions scandal: failure of corporate governance?” Problemy Ekorozwoju, vol. 15, no. 2, pp. 33–39, 2020.
[76]
R. Schwartz, A. Vassilev, K. Greene, L. Perine, A. Burt, and P. Hall, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, NIST Special Publication 1270, Gaithersburg, MD, USA, 2022.
[77]
C. M. Gevaert, M. Carman, B. Rosman, Y. Georgiadou, and R. Soden, “Fairness and accountability of ai in disaster risk management: opportunities and challenges,” Patterns, vol. 2, no. 11, 2021.
[78]
F. Königstorfer and S. Thalmann, “Ai documentation: a path to accountability,” Journal of Responsible Technology, vol. 11, 2022.
[79]
I. Rahwan, M. Cebrian, N. Obradovich, J. Bongard, J.-F. Bonnefon, C. Breazeal, J. W. Crandall, N. A. Christakis, I. D. Couzin, M. O. Jackson, N. R. Jennings, E. Kamar, I. M. Kloumann, H. Larochelle, D. Lazer, R. McElreath, A. Mislove, D. C. Parkes, A. Pentland, M. E. Roberts, A. Shariff, J. B. Tenenbaum, and M. Wellman, “Machine behaviour,” Nature, vol. 568, no. 7753, pp. 477–486, 2019.
[80]
J. Morley, L. Floridi, L. Kinsey, and A. Elhalal, “From what to how: an initial review of publicly available ai ethics tools, methods and research to translate principles into practices,” Science and Engineering Ethics, vol. 26, no. 4, pp. 2141–2168, 2020.
[81]
R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, “A survey of methods for explaining black box models,” ACM Computing Surveys, vol. 51, no. 5, pp. 1–42, 2018.
[82]
D. Omeiza, H. Web, M. Jirotka, and L. Kunze, “Towards accountability: providing intelligible explanations in autonomous driving,” in Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV), pp. 231–237, IEEE, Nagoya, Japan, July, 2021.
[83]
A. W. Flores, K. Bechtel, and C. T. Lowenkamp, “False positives, false negatives, and false analyses: a rejoinder to machine bias: there’s software used across the country to predict future criminals. and it’s biased against blacks, Fed,” Probation, vol. 80, p. 38, 2016.
[84]
A. Kadambi, “Achieving fairness in medical devices,” Science, vol. 372, no. 6537, pp. 30–31, 2021.
[85]
N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, “A survey on bias and fairness in machine learning,” ACM Computing Surveys, vol. 54, no. 6, pp. 1–35, 2021.
[86]
M. Madaio, L. Egede, H. Subramonyam, J. Wortman Vaughan, and H. Wallach, “Assessing the fairness of ai systems: ai practitioners’ processes, challenges, and needs for support,” Proceedings of the ACM on Human-Computer Interaction, vol. 6, no. 1, pp. 1–26, 2022.
[87]
C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” Nature Machine Intelligence, vol. 1, no. 5, pp. 206–215, 2019.
[88]
M. von Zahn, S. Feuerriegel, and N. Kuehl, “The cost of fairness in ai: evidence from e-commerce,” Business & information systems engineering, vol. 64, no. 3, pp. 335–348, 2022.
[89]
I. Pastaltzidis, N. Dimitriou, K. Quezada-Tavarez, S. Aidinlis, T. Marquenie, A. Gurzawska, and D. Tzovaras, “Data augmentation for fairness-aware machine learning: preventing algorithmic bias in law enforcement systems,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 2302–2314, Seoul, Republic of Korea, June, 2022.
[90]
M. Bogen and A. Rieke, “Help wanted: an examination of hiring algorithms, equity, and bias,” Upturn, December, vol. 7, 2018.
[91]
A. Chouldechova, D. Benavides-Prado, O. Fialko, and R. Vaithianathan, “A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions,” in Proceedings of the Conference on Fairness, Accountability and Transparency, pp. 134–148, PMLR, Atlanta, GA, USA, January, 2018.
[92]
C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel, “Fairness through awareness,” in Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214–226, New York, NY, USA, March, 2012.
[93]
L. Oneto and S. Chiappa, “Fairness in machine learning,” in Recent Trends in Learning from Data, pp. 155–196, Springer, Berlin, Germany, 2020.
[94]
M. J. Kusner, J. Loftus, C. Russell, and R. Silva, “Counterfactual fairness,” Advances in Neural Information Processing Systems, vol. 30, 2017.
[95]
V. Grari, S. Lamprier, and M. Detyniecki, “Adversarial learning for counterfactual fairness,” Machine Learning, vol. 112, no. 3, pp. 741–763, 2022.
[96]
F. P. Santos, F. C. Santos, A. Paiva, and J. M. Pacheco, “Evolutionary dynamics of group fairness,” Journal of Theoretical Biology, vol. 378, pp. 96–102, 2015.
[97]
M. M. Khalili, X. Zhang, and M. Abroshan, “Fair sequential selection using supervised learning models,” Advances in Neural Information Processing Systems, vol. 34, pp. 28144–28155, 2021.
[98]
Y. Zheng, S. Wang, and J. Zhao, “Equality of opportunity in travel behavior prediction with deep neural networks and discrete choice models,” Transportation Research Part C: Emerging Technologies, vol. 132, 2021.
[99]
P. Besse, E. del Barrio, P. Gordaliza, J.-M. Loubes, and L. Risser, “A survey of bias in machine learning through the prism of statistical parity,” The American Statistician, vol. 76, no. 2, pp. 188–198, 2022.
[100]
S. Feuerriegel, M. Dolata, and G. Schwabe, “Fair AI: challenges and opportunities,” Business and Information Systems Engineering, vol. 62, no. 4, pp. 379–384, 2020.
[101]
Y. Chen, E. Huerta, J. Duarte, P. Harris, D. S. Katz, M. S. Neubauer, D. Diaz, F. Mokhtar, R. Kansal, S. E. Park, V. V. Kindratenko, Z. Zhao, and R. Rusack, “A fair and ai-ready Higgs boson decay dataset,” Scientific Data, vol. 9, pp. 31–10, 2022.
[102]
European Commission, High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, European Commission, Brussels, Belgium, 2019.
[103]
IEEE, “IEEE standard computer dictionary: a compilation of IEEE standard computer glossaries,” IEEE Std, vol. 610, pp. 1–217, 1991.
[104]
M. Shafique, M. Naseer, T. Theocharides, C. Kyrkou, O. Mutlu, L. Orosa, and J. Choi, “Robust machine learning systems: challenges, current trends, perspectives, and the road ahead,” IEEE Design and Test, vol. 37, no. 2, pp. 30–57, 2020.
[105]
I. Goodfellow, P. McDaniel, and N. Papernot, “Making machine learning robust against adversarial inputs,” Communications of the ACM, vol. 61, no. 7, pp. 56–66, 2018.
[106]
J. Zhang and C. Li, “Adversarial examples: opportunities and challenges,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 7, pp. 2578–2593, 2020.
[107]
I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” 2014, https://arxiv.org/abs/1412.6572.
[108]
A. Arnab, O. Miksik, and P. H. Torr, “On the robustness of semantic segmentation models to adversarial attacks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 888–897, Salt Lake City, UT, USA, June, 2018.
[109]
S. H. Silva and P. Najafirad, “Opportunities and challenges in deep learning adversarial robustness: a survey,” 2020, https://arxiv.org/abs/2007.00753.
[110]
N. Akhtar, M. Jalwana, M. Bennamoun, and A. S. Mian, “Attack to fool and explain deep networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 10, pp. 5980–5995, 2022.
[111]
N. Carlini, A. Athalye, N. Papernot, W. Brendel, J. Rauber, D. Tsipras, I. Goodfellow, A. Madry, and A. Kurakin, “On evaluating adversarial robustness,” 2019, https://arxiv.org/abs/1902.06705.
[112]
B. dos Santos Silva, C.-T. Lee, R. Williams, B.-Y. Kuo, C.-M. Chang, and S. Muppidi, “Inline detection and prevention of adversarial attacks,” Springer, Berlin, Germany, 2022, US Patent App. 16/952,494.
[113]
A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” in Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, Canada, April, 2018.
[114]
M. Nicolae, M. Sinn, T. N. Minh, A. Rawat, M. Wistuba, V. Zantedeschi, I. M. Molloy, and B. Edwards, “Adversarial robustness toolbox v0.2.2,” 2018, https://arxiv.org/abs/1807.01069.
[115]
N. Drenkow, N. Sani, I. Shpitser, and M. Unberath, “Robustness in deep learning for computer vision: mind the gap?” 2021, https://arxiv.org/pdf/2112.00639.pdf.
[116]
Z. Luo, C. Zhu, L. Fang, G. Kou, R. Hou, and X. Wang, “An effective and practical gradient inversion attack,” International Journal of Intelligent Systems, vol. 37, no. 11, pp. 9373–9389, 2022.
[117]
D. Hendrycks and T. G. Dietterich, “Benchmarking neural network robustness to common corruptions and perturbations,” in Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, OpenReview.net, New Orleans, LA, USA, May, 2019.
[118]
A. Laugros, A. Caplier, and M. Ospici, “Are adversarial robustness and common perturbation robustness independent attributes?” in Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshops, ICCV Workshops 2019, pp. 1045–1054, IEEE, Seoul, Korea (South), October, 2019.
[119]
V. Mitra, H. Franco, R. M. Stern, J. van Hout, L. Ferrer, M. Graciarena, W. Wang, D. Vergyri, A. Alwan, and J. H. L. Hansen, “Robust features in deep-learning-based speech recognition,” in New Era for Robust Speech Recognition, Exploiting Deep Learning, pp. 187–217, Springer, Berlin, Germany, 2017.
[120]
F. Cartella, O. Anunciação, Y. Funabiki, D. Yamaguchi, T. Akishita, and O. Elshocht, “Adversarial attacks for tabular data: application to fraud detection and imbalanced data,” in Proceedings of the Workshop on Artificial Intelligence Safety 2021 (SafeAI 2021) co-located with the 35th AAAI Conference on Artificial Intelligence (AAAI 2021), vol. 2808, New York, NY, USA, February, 2021.
[121]
F. Karim, S. Majumdar, and H. Darabi, “Adversarial attacks on time series,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 10, pp. 3309–3320, 2021.
[122]
F. Taymouri, M. L. Rosa, S. Erfani, Z. D. Bozorgi, and I. Verenich, “Predictive business process monitoring via generative adversarial nets: the case of next event prediction,” in International Conference on Business Process Management, pp. 237–256, Springer, Berlin, Germany, 2020.
[123]
D. Gursoy, O. H. Chi, L. Lu, and R. Nunkoo, “Consumers acceptance of artificially intelligent (ai) device use in service delivery,” International Journal of Information Management, vol. 49, pp. 157–169, 2019.
[124]
H. Choung, P. David, and A. Ross, “Trust in ai and its role in the acceptance of ai technologies,” International Journal of Human-Computer Interaction, vol. 39, no. 9, pp. 1727–1739, 2022.
[125]
C. Nicodeme, “Build confidence and acceptance of ai-based decision support systems-explainable and liable ai,” in Proceedings of the 2020 13th International Conference on Human System Interaction (HSI), pp. 20–23, IEEE, Tokyo, Japan, June, 2020.
[126]
A. Theodorou and V. Dignum, “Towards ethical and socio-legal governance in ai,” Nature Machine Intelligence, vol. 2, no. 1, pp. 10–12, 2020.
[127]
A. L. Ostrom, D. Fotheringham, and M. J. Bitner, “Customer acceptance of ai in service encounters: understanding antecedents and consequences,” in Handbook of Service Science, vol. 2, pp. 77–103, Springer, Berlin, Germany, 2019.
[128]
I. Mezgár and J. Váncza, “From ethics to standards–a path via responsible ai to cyber-physical production systems,” Annual Reviews in Control, vol. 53, pp. 391–404, 2022.
[129]
S. Borau, T. Otterbring, S. Laporte, and S. Fosso Wamba, “The most human bot: female gendering increases humanness perceptions of bots and acceptance of ai,” Psychology and Marketing, vol. 38, no. 7, pp. 1052–1068, 2021.
[130]
V. Braithwaite, “Beyond the bubble that is robodebt: how governments that lose integrity threaten democracy,” Australian Journal of Social Issues, vol. 55, no. 3, pp. 242–259, 2020.
[131]
C. Campione, “The dark nudge era: cambridge analytica, digital manipulation in politics, and the fragmentation of society,” Springer, Berlin, Germany, 2018, Bachelor's Degree Thesis.
[132]
D. Perino, K. Katevas, A. Lutu, E. Marin, and N. Kourtellis, “Privacy-preserving ai for future networks,” Communications of the ACM, vol. 65, no. 4, pp. 52–53, 2022.
[133]
H. Berghel, “Equifax and the latest round of identity theft roulette,” Computer, vol. 50, no. 12, pp. 72–76, 2017.
[134]
C. Greene and J. Stavins, “Did the target data breach change consumer assessments of payment card security?” Journal of Payments Strategy and Systems, vol. 11, pp. 121–133, 2017.
[135]
G. Giordano, F. Palomba, and F. Ferrucci, “On the use of artificial intelligence to deal with privacy in iot systems: a systematic literature review,” Journal of Systems and Software, vol. 193, 2022.
[136]
D. Su, H. T. Huynh, Z. Chen, Y. Lu, and W. Lu, “Re-identification attack to privacy-preserving data analysis with noisy sample-mean,” in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1045–1053, California, CA, USA, June, 2020.
[137]
T. Lee, I. M. Molloy, and D. Su, “Protecting cognitive systems from model stealing attacks,” Google Patent, New York, NY, USA, 2021, US Patent 11,023,593.
[138]
H. Chen, S. U. Hussain, F. Boemer, E. Stapf, A. R. Sadeghi, F. Koushanfar, and R. Cammarota, “Developing privacy-preserving ai systems: the lessons learned,” in Proceedings of the 2020 57th ACM/IEEE Design Automation Conference (DAC), pp. 1–4, IEEE, California, CA, USA, July, 2020.
[139]
M. Rosenquist, Defense in Depth Strategy Optimizes Security, Intel Corporation, Santa Clara, CA, USA, 2008.
[140]
G. Kaissis, A. Ziller, J. Passerat-Palmbach, T. Ryffel, D. Usynin, A. Trask, I. Lima, J. Mancuso, F. Jungmann, M.-M. Steinborn, A. Saleh, M. Makowski, D. Rueckert, and R. Braren, “End-to-end privacy preserving deep learning on multi-institutional medical imaging,” Nature Machine Intelligence, vol. 3, no. 6, pp. 473–484, 2021.
[141]
T. Li, S. Xie, Z. Zeng, M. Dong, and A. Liu, “Atps: an ai based trust-aware and privacy-preserving system for vehicle managements in sustainable vanets,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 10, pp. 19837–19851, 2022.
[142]
N. Chmait, D. L. Dowe, Y.-F. Li, and D. G. Green, “An information-theoretic predictive model for the accuracy of ai agents adapted from psychometrics,” in Proceedings of the International Conference on Artificial General Intelligence, pp. 225–236, Springer, San Francisco, CA, USA, October, 2017.
[143]
X. Ye, L. Zhao, and L. Wang, “Diagnostic accuracy of endoscopic ultrasound with artificial intelligence for gastrointestinal stromal tumors: a meta-analysis,” Journal of Digestive Diseases, vol. 23, no. 5-6, pp. 253–261, 2022.
[144]
C. Lin, T. Chau, C.-S. Lin, H.-S. Shang, W.-H. Fang, D.-J. Lee, C.-C. Lee, S.-H. Tsai, C.-H. Wang, and S.-H. Lin, “Point-of-care artificial intelligence-enabled ecg for dyskalemia: a retrospective cohort analysis for accuracy and outcome prediction,” NPJ digital medicine, vol. 5, pp. 8–12, 2022.
[145]
B. Haibe-Kains, G. A. Adam, A. Hosny, F. Khodakarami, T. Shraddha, R. Kusko, S. A. Sansone, W. Tong, R. D. Wolfinger, C. E. Mason, W. Jones, J. Dopazo, C. Furlanello, L. Waldron, B. Wang, C. McIntosh, A. Goldenberg, A. Kundaje, C. S. Greene, T. Broderick, M. M. Hoffman, J. T. Leek, K. Korthauer, W. Huber, A. Brazma, J. Pineau, R. Tibshirani, T. Hastie, J. P. A. Ioannidis, J. Quackenbush, and H. J. W. L. Aerts, “Transparency and reproducibility in artificial intelligence,” Nature, vol. 586, no. 7829, pp. E14–E16, 2020.
[146]
M. B. McDermott, S. Wang, N. Marinsek, R. Ranganath, L. Foschini, and M. Ghassemi, “Reproducibility in machine learning for health research: still a ways to go,” Science Translational Medicine, vol. 13, no. 586, 2021.
[147]
O. E. Gundersen and S. Kjensmo, “State of the art: reproducibility in artificial intelligence,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, Washington DC, USA, February, 2018.
[148]
O. E. Gundersen, S. Shamsaliei, and R. J. Isdahl, “Do machine learning platforms provide out-of-the-box reproducibility?” Future Generation Computer Systems, vol. 126, pp. 34–47, 2022.
[149]
J. Wang, J. Jiang, D. Zhang, Y.-Z. Zhang, L. Guo, Y. Jiang, S. Du, and Q. Zhou, “An integrated ai model to improve diagnostic accuracy of ultrasound and output known risk features in suspicious thyroid nodules,” European Radiology, vol. 32, no. 3, pp. 2120–2129, 2022.
[150]
R. Koulu, “Proceduralizing control and discretion: human oversight in artificial intelligence policy,” Maastricht Journal of European and Comparative Law, vol. 27, no. 6, pp. 720–735, 2020.
[151]
I. Garcia-Magarino, R. Muttukrishnan, and J. Lloret, “Human-centric ai for trustworthy iot systems with explainable multilayer perceptrons,” IEEE Access, vol. 7, pp. 125562–125574, 2019.
[152]
R. Fanni, V. E. Steinkogler, G. Zampedri, and J. Pierson, “Enhancing human agency through redress in artificial intelligence systems,” AI & Society, vol. 38, no. 2, pp. 537–547, 2022.
[153]
B. C. Stahl, R. Rodrigues, N. Santiago, and K. Macnish, “A european agency for artificial intelligence: protecting fundamental rights and ethical values,” Computer Law and Security Report, vol. 45, 2022.
[154]
W. Liang, G. A. Tadesse, D. Ho, L. Fei-Fei, M. Zaharia, C. Zhang, and J. Zou, “Advances, challenges and opportunities in creating data for trustworthy ai,” Nature Machine Intelligence, vol. 4, no. 8, pp. 669–677, 2022.
[155]
M. Mora-Cantallops, S. Sánchez-Alonso, E. García-Barriocanal, and M.-A. Sicilia, “Traceability for trustworthy ai: a review of models and tools,” Big Data and Cognitive Computing, vol. 5, no. 2, p. 20, 2021.
[156]
T. Harrison, L. F. Luna-Reyes, T. Pardo, N. De Paula, M. Najafabadi, and J. Palmer, “The data firehose and ai in government: why data management is a key to value and ethics,” in Proceedings of the 20th Annual International Conference on Digital Government Research, pp. 171–176, New York, NY, USA, December 2019.
[157]
L. Alzubaidi, J. Zhang, A. J. Humaidi, A. Al-Dujaili, Y. Duan, O. Al-Shamma, J. Santamaría, M. A. Fadhel, M. Al-Amidie, and L. Farhan, “Review of deep learning: concepts, cnn architectures, challenges, applications, future directions,” Journal of big Data, vol. 8, pp. 53–74, 2021.
[158]
P. Anagnostou, M. Capocasa, N. Milia, E. Sanna, C. Battaggia, D. Luzi, and G. Destro Bisol, “When data sharing gets close to 100%: what human paleogenetics can teach the open science movement,” PLoS One, vol. 10, no. 3, 2015.
[159]
D. Pandove and A. Malhi, “A correlation based recommendation system for large data sets,” Journal of Grid Computing, vol. 19, no. 4, pp. 42–23, 2021.
[160]
S. Chai, W. Chu, Z. Zhang, Z. Li, and M. Z. Abedin, “Dynamic nonlinear connectedness between the green bonds, clean energy, and stock price: the impact of the COVID-19 pandemic,” Annals of Operations Research, vol. 28, 2022.
[161]
L. Alzubaidi, J. Bai, A. Al-Sabaawi, J. Santamaría, A. Albahri, B. S. N. Al-dabbagh, M. A. Fadhel, M. Manoufali, J. Zhang, A. H. Al-Timemy, Y. Duan, A. Abdullah, L. Farhan, Y. Lu, A. Gupta, F. Albu, A. Abbosh, and Y. Gu, “A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications,” Journal of Big Data, vol. 10, no. 1, p. 46, 2023.
[162]
Z. Alammar, L. Alzubaidi, J. Zhang, Y. Li, W. Lafta, and Y. Gu, “Deep transfer learning with enhanced feature fusion for detection of abnormalities in x-ray images,” Cancers, vol. 15, p. 4007, 2023.
[163]
A. H. Al-Timemy, L. Alzubaidi, Z. M. Mosa, H. Abdelmotaal, N. H. Ghaeb, A. Lavric, R. M. Hazarbassanov, H. Takahashi, Y. Gu, and S. Yousefi, “A deep feature fusion of improved suspected keratoconus detection with deep learning,” Diagnostics, vol. 13, no. 10, p. 1689, 2023.
[164]
R. I. Hasan, S. M. Yusuf, M. S. Mohd Rahim, and L. Alzubaidi, “Automatic clustering and classification of coffee leaf diseases based on an extended kernel density estimation approach,” Plants, vol. 12, no. 8, p. 1603, 2023.
[165]
M. A. Shyaa, Z. Zainol, R. Abdullah, M. Anbar, L. Alzubaidi, and J. Santamaría, “Enhanced intrusion detection with data stream classification and concept drift guided by the incremental learning genetic programming combiner,” Sensors, vol. 23, no. 7, p. 3736, 2023.
[166]
F. H. Awad, M. M. Hamad, and L. Alzubaidi, “Robust classification and detection of big medical data using advanced parallel k-means clustering, yolov4, and logistic regression,” Life, vol. 13, no. 3, p. 691, 2023.
[167]
S. A. Jebur, K. A. Hussein, H. K. Hoomod, L. Alzubaidi, and J. Santamaría, “Review on deep learning approaches for anomaly event detection in video surveillance,” Electronics, vol. 12, no. 1, p. 29, 2022.
[168]
G. Abbas, A. Mehmood, M. Carsten, G. Epiphaniou, and J. Lloret, “Safety, security and privacy in machine learning based internet of things,” Journal of Sensor and Actuator Networks, vol. 11, no. 3, p. 38, 2022.
[169]
A. Strzelecki and M. Rizun, “Consumers’ change in trust and security after a personal data breach in online shopping,” Sustainability, vol. 14, no. 10, p. 5866, 2022.
[170]
C. Thapa and S. Camtepe, “Precision health data: requirements, challenges and existing techniques for data security and privacy,” Computers in Biology and Medicine, vol. 129, 2021.
[171]
W. Liang, Y. Yang, C. Yang, Y. Hu, S. Xie, K.-C. Li, and J. Cao, “Pdpchain: a consortium blockchain-based privacy protection scheme for personal data,” IEEE Transactions on Reliability, vol. 72, no. 2, pp. 586–598, 2023.
[172]
B. van Giffen, D. Herhausen, and T. Fahse, “Overcoming the pitfalls and perils of algorithms: a classification of machine learning biases and mitigation methods,” Journal of Business Research, vol. 144, pp. 93–106, 2022.
[173]
M.-P. Fernando, F. Cèsar, N. David, and H.-O. José, “Missing the missing values: the ugly duckling of fairness in machine learning,” International Journal of Intelligent Systems, vol. 36, no. 7, pp. 3217–3258, 2021.
[174]
T. F. Kusumasari and R. Fauzi, “Design guidelines and process of metadata management based on data management body of knowledge,” in Proceedings of the 2021 7th International Conference on Information Management (ICIM), pp. 87–91, IEEE, London, UK, March 2021.
[175]
L. R. Kaplan, M. Farooque, D. Sarewitz, and D. Tomblin, “Designing participatory technology assessments: a reflexive method for advancing the public role in science policy decision-making,” Technological Forecasting and Social Change, vol. 171, 2021.
[176]
M. Al-Ruithe, E. Benkhelifa, and K. Hameed, “Data governance taxonomy: cloud versus non-cloud,” Sustainability, vol. 10, no. 1, p. 95, 2018.
[177]
N. Thompson, R. Ravindran, and S. Nicosia, “Government data does not mean data governance: lessons learned from a public sector application audit,” Government Information Quarterly, vol. 32, no. 3, pp. 316–322, 2015.
[178]
T. Usova and R. Laws, “Teaching a one-credit course on data literacy and data visualisation,” Journal of Information Literacy, vol. 15, no. 1, p. 84, 2021.
[179]
L. Edwards and M. Veale, “Enslaving the algorithm: from a “right to an explanation” to a “right to better decisions”,” IEEE Security & Privacy, vol. 16, no. 3, pp. 46–54, 2018.
[180]
L. Ungerer and S. Slade, “Ethical considerations of artificial intelligence in learning analytics in distance education contexts,” in Learning Analytics in Open and Distributed Learning, pp. 105–120, Springer, Berlin, Germany, 2022.
[181]
C. D. Kloos, Y. Dimitriadis, D. Hernández-Leo, C. Alario-Hoyos, A. Martínez-Monés, P. Santos, P. J. Muñoz-Merino, J. I. Asensio-Pérez, and L. V. Safont, “H2o learn-hybrid and human-oriented learning: trustworthy and human-centered learning analytics (tahcla) for hybrid education,” in Proceedings of the 2022 IEEE Global Engineering Education Conference (EDUCON), pp. 94–101, IEEE, Tunis, Tunisia, March 2022.
[182]
L. Wilton, S. Ip, M. Sharma, and F. Fan, “Where is the ai? ai literacy for educators,” in International Conference on Artificial Intelligence in Education, pp. 180–188, Springer, Berlin, Germany, 2022.
[183]
V. A. Gensini, C. Converse, W. S. Ashley, and M. Taszarek, “Machine learning classification of significant tornadoes and hail in the United States using era5 proximity soundings,” Weather and Forecasting, vol. 36, pp. 2143–2160, 2021.
[184]
C. Calvo-Sancho, J. Díaz-Fernández, Y. Martín, P. Bolgiani, M. Sastre, J. J. González-Alemán, D. Santos-Muñoz, J. I. Farrán, and M. L. Martín, “Supercell convective environments in Spain based on era5: hail and non-hail differences,” Weather and Climate Dynamics, vol. 3, no. 3, pp. 1021–1036, 2022.
[185]
A. J. Hill and R. S. Schumacher, “Forecasting excessive rainfall with random forests and a deterministic convection-allowing model,” Weather and Forecasting, vol. 36, pp. 1693–1711, 2021.
[186]
A. McGovern, I. Ebert-Uphoff, D. J. Gagne, and A. Bostrom, “Why we need to focus on developing ethical, responsible, and trustworthy artificial intelligence approaches for environmental science,” Environmental Data Science, vol. 1, p. e6, 2022.
[187]
S. Kantayya, Coded Bias, 7th empire media, London, UK, 2002.
[188]
S. E. Brammer, “Documentary review: coded bias,” Feminist Pedagogy, vol. 2, p. 12, 2022.
[189]
V. Mithal, G. Nayak, A. Khandelwal, V. Kumar, R. Nemani, and N. C. Oza, “Mapping burned areas in tropical forests using a novel machine learning framework,” Remote Sensing, vol. 10, no. 2, p. 69, 2018.
[190]
M. Molinaro and G. Orzes, “From forest to finished products: the contribution of industry 4.0 technologies to the wood sector,” Computers in Industry, vol. 138, 2022.
[191]
A. Khandelwal, A. Karpatne, P. Ravirathinam, R. Ghosh, Z. Wei, H. A. Dugan, P. C. Hanson, and V. Kumar, “Realsat, a global dataset of reservoir and lake surface area variations,” Scientific Data, vol. 9, pp. 356–412, 2022.
[192]
I. Duporge, O. Isupova, S. Reece, D. W. Macdonald, and T. Wang, “Using very-high-resolution satellite imagery and deep learning to detect and count african elephants in heterogeneous landscapes,” Remote Sensing in Ecology and Conservation, vol. 7, no. 3, pp. 369–381, 2021.
[193]
D. Tuia, B. Kellenberger, S. Beery, B. R. Costelloe, S. Zuffi, B. Risse, A. Mathis, M. W. Mathis, F. van Langevelde, T. Burghardt, R. Kays, H. Klinck, M. Wikelski, I. D. Couzin, G. van Horn, M. C. Crofoot, C. V. Stewart, and T. Berger-Wolf, “Perspectives in machine learning for wildlife conservation,” Nature Communications, vol. 13, pp. 792–815, 2022.
[194]
C. Chilson, K. Avery, A. McGovern, E. Bridge, D. Sheldon, and J. Kelly, “Automated detection of bird roosts using nexrad radar data and convolutional neural networks,” Remote Sensing in Ecology and Conservation, vol. 5, no. 1, pp. 20–32, 2019.
[195]
W. Ruan, K. Wu, Q. Chen, and C. Zhang, “Resnet-based bio-acoustics presence detection technology of hainan gibbon calls,” Applied Acoustics, vol. 198, 2022.
[196]
R. M. Rogers, J. Buler, T. Clancy, and H. Campbell, “Repurposing open-source data from weather radars to reduce the costs of aerial waterbird surveys,” Ecological Solutions and Evidence, vol. 3, 2022.
[197]
D. Diochnos, S. Mahloujifar, and M. Mahmoody, “Adversarial risk and robustness: general definitions and implications for the uniform distribution,” Advances in Neural Information Processing Systems, vol. 31, 2022.
[198]
M. S. Pydi and V. Jog, “The many faces of adversarial risk,” Advances in Neural Information Processing Systems, vol. 34, pp. 10000–10012, 2021.
[199]
G. Alicioglu and B. Sun, “A survey of visual analytics for explainable artificial intelligence methods,” Computers & Graphics, vol. 102, pp. 502–520, 2022.
[200]
D. Minh, H. X. Wang, Y. F. Li, and T. N. Nguyen, “Explainable artificial intelligence: a comprehensive review,” Artificial Intelligence Review, vol. 55, no. 5, pp. 3503–3568, 2021.
[201]
G. Vilone and L. Longo, “Notions of explainability and evaluation approaches for explainable artificial intelligence,” Information Fusion, vol. 76, pp. 89–106, 2021.
[202]
J. A. Esterhuizen, B. R. Goldsmith, and S. Linic, “Interpretable machine learning for knowledge generation in heterogeneous catalysis,” Nature catalysis, vol. 5, no. 3, pp. 175–184, 2022.
[203]
Q. Wang, L. T. Tan, R. Q. Hu, and Y. Qian, “Hierarchical energy-efficient mobile-edge computing in iot networks,” IEEE Internet of Things Journal, vol. 7, no. 12, pp. 11626–11639, 2020.
[204]
H. Hu, Q. Wang, R. Q. Hu, and H. Zhu, “Mobility-aware offloading and resource allocation in a mec-enabled iot network with energy harvesting,” IEEE Internet of Things Journal, vol. 8, no. 24, pp. 17541–17556, 2021.
[205]
S. Fu, F. Zhou, and R. Q. Hu, “Resource allocation in a relay-aided mobile edge computing system,” IEEE Internet of Things Journal, vol. 9, no. 23, pp. 23659–23669, 2022.
[206]
H. Xu, P. V. Klaine, O. Onireti, B. Cao, M. Imran, and L. Zhang, “Blockchain-enabled resource management and sharing for 6g communications,” Digital Communications and Networks, vol. 6, no. 3, pp. 261–269, 2020.
[207]
J. Wang, Z. Yan, H. Wang, T. Li, and W. Pedrycz, “A survey on trust models in heterogeneous networks,” IEEE Communications Surveys & Tutorials, vol. 24, no. 4, pp. 2127–2162, 2022.
[208]
S. Taimoor, L. Ferdouse, and W. Ejaz, “Holistic resource management in uav-assisted wireless networks: an optimization perspective,” Journal of Network and Computer Applications, vol. 205, 2022.
[209]
H. Xu, L. Zhang, O. Onireti, Y. Fang, W. J. Buchanan, and M. A. Imran, “Beeptrace: blockchain-enabled privacy-preserving contact tracing for covid-19 pandemic and beyond,” IEEE Internet of Things Journal, vol. 8, no. 5, pp. 3915–3929, 2021.
[210]
R. Kumar, A. A. Khan, J. Kumar, N. A. Golilarz, N. A. Golilarz, S. Zhang, Y. Ting, C. Zheng, and W. Wang, “Blockchain-federated-learning and deep learning models for covid-19 detection using ct imaging,” IEEE Sensors Journal, vol. 21, no. 14, pp. 16301–16314, 2021.
[211]
L. Ricci, D. D. F. Maesa, A. Favenza, and E. Ferro, “Blockchains for covid-19 contact tracing and vaccine support: a systematic review,” IEEE Access, vol. 9, pp. 37936–37950, 2021.
[212]
Q. Zhang, J. Wu, M. Zanella, W. Yang, A. K. Bashir, and W. Fornaciari, “Sema-iiovt: emergent semantic-based trustworthy information-centric fog system and testbed for intelligent internet of vehicles,” IEEE Consumer Electronics Magazine, vol. 12, no. 1, pp. 70–79, 2023.
[213]
M. Abdel-Basset, N. Moustafa, H. Hawash, and W. Ding, “Federated learning for privacy-preserving internet of things,” in Deep Learning Techniques for IoT Security and Privacy, pp. 215–228, Springer, Berlin, Germany, 2022.
[214]
A. Makkar and J. H. Park, “Securecps: cognitive inspired framework for detection of cyber attacks in cyber–physical systems,” Information Processing & Management, vol. 59, no. 3, 2022.
[215]
A. Makkar, U. Ghosh, D. B. Rawat, and J. H. Abawajy, “Fedlearnsp: preserving privacy and security using federated learning and edge computing,” IEEE Consumer Electronics Magazine, vol. 11, no. 2, pp. 21–27, 2022.
[216]
S. Tarikere, I. Donner, and D. Woods, “Diagnosing a healthcare cybersecurity crisis: the impact of iomt advancements and 5g,” Business Horizons, vol. 64, no. 6, pp. 799–807, 2021.
[217]
B. Ghimire and D. B. Rawat, “Secure, privacy preserving and verifiable federating learning using blockchain for internet of vehicles,” IEEE Consumer Electronics Magazine, vol. 11, no. 6, pp. 67–74, 2022.
[218]
L. Malina, G. Srivastava, P. Dzurenda, J. Hajny, and S. Ricci, “A privacy-enhancing framework for internet of things services,” in International Conference on Network and System Security, pp. 77–97, Springer, Berlin, Germany, 2019.
[219]
M. Baza, R. Amer, A. Rasheed, G. Srivastava, M. Mahmoud, and W. Alasmary, “A blockchain-based energy trading scheme for electric vehicles,” in Proceedings of the 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC), pp. 1–7, IEEE, Las Vegas, NV, USA, January 2021.
[220]
R. Jabbar, E. Dhib, A. B. Said, M. Krichen, N. Fetais, E. Zaidan, and K. Barkaoui, “Blockchain technology for intelligent transportation systems: a systematic literature review,” IEEE Access, vol. 10, pp. 20995–21031, 2022.
[221]
S. Khan, F. Luo, Z. Zhang, M. A. Rahim, S. Khan, S. F. Qadri, and K. Wu, “A privacy-preserving and transparent identity management scheme for vehicular social networking,” IEEE Transactions on Vehicular Technology, vol. 71, no. 11, pp. 11555–11570, 2022.
[222]
K. N. Qureshi, L. Shahzad, A. Abdelmaboud, T. A. Elfadil Eisa, B. Alamri, I. T. Javed, A. Al-Dhaqm, and N. Crespi, “A blockchain-based efficient, secure and anonymous conditional privacy-preserving and authentication scheme for the internet of vehicles,” Applied Sciences, vol. 12, no. 1, p. 476, 2022.
[223]
Y. Guo, Z. Wan, H. Cui, X. Cheng, and F. Dressler, “Vehicloak: a blockchain-enabled privacy-preserving payment scheme for location-based vehicular services,” IEEE Transactions on Mobile Computing, vol. 8, pp. 1–13, 2022.
[224]
W. Ahmed, W. Di, and D. Mukathe, “Privacy-preserving blockchain-based authentication and trust management in vanets,” IET Networks, vol. 11, no. 3-4, pp. 89–111, 2022.
[225]
L. T. Tan, R. Q. Hu, and L. Hanzo, “Twin-timescale artificial intelligence aided mobility-aware edge caching and computing in vehicular networks,” IEEE Transactions on Vehicular Technology, vol. 68, no. 4, pp. 3086–3099, 2019.
[226]
J. Liu, M. Ahmed, M. A. Mirza, W. U. Khan, D. Xu, J. Li, A. Aziz, and Z. Han, “Rl/drl meets vehicular task offloading using edge and vehicular cloudlet: a survey,” IEEE Internet of Things Journal, vol. 9, no. 11, pp. 8315–8338, 2022.
[227]
T. Alladi, V. Kohli, V. Chamola, and F. R. Yu, “A deep learning based misbehavior classification scheme for intrusion detection in cooperative intelligent transportation systems,” Digital Communications and Networks, vol. 13, 2022.
[228]
G. Muhammad and M. Alhussein, “Security, trust, and privacy for the internet of vehicles: a deep learning approach,” IEEE Consumer Electronics Magazine, vol. 11, no. 6, pp. 49–55, 2022.
[229]
T. Le and S. Shetty, “Artificial intelligence-aided privacy preserving trustworthy computation and communication in 5g-based iot networks,” Ad Hoc Networks, vol. 126, 2022.
[230]
G. Kumar, M. Lydia, and Y. Levron, “Security challenges in 5g and iot networks: a review,” Secure Communication for 5G and IoT Networks, vol. 9, pp. 1–13, 2022.
[231]
D. N. Molokomme, A. J. Onumanyi, and A. M. Abu-Mahfouz, “Edge intelligence in smart grids: a survey on architectures, offloading models, cyber security measures, and challenges,” Journal of Sensor and Actuator Networks, vol. 11, no. 3, p. 47, 2022.
[232]