Abstract
Artificial intelligence systems today interact with humans less and less, leading to fully autonomous decision-making processes. In this context, erroneous predictions can have severe consequences. As a solution, we design and develop a set of methods derived from eXplainable AI (XAI) models. The aim is to define “safety regions” in the feature space where the rate of false negatives (e.g., in a mobility scenario, predicting no collision when a collision actually occurs) tends to zero. We test and compare the proposed algorithms on two different datasets (physical fatigue and vehicle platooning) and reach markedly different conclusions: the results depend strongly on the level of noise in the dataset rather than on the algorithm at hand.
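To make the notion of a “safety region” concrete, the sketch below shows one way to carve out a zero-false-negative region on a calibration set. It is a minimal illustration assuming a generic scikit-learn classifier and a synthetic dataset; the paper's actual methods are derived from rule-based XAI models, and the threshold search here is only a stand-in for them.

```python
# Minimal sketch (not the paper's algorithm): shrink the "safe" region
# until it contains zero false negatives on a calibration set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a mobility dataset: label 1 = collision (unsafe).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
p_unsafe = clf.predict_proba(X_cal)[:, 1]  # estimated collision probability

# Declare a point "safe" only when p_unsafe <= tau. Scan thresholds from
# the largest down and keep the first one with no false negative, i.e.
# no actual collision (y = 1) falls inside the declared safety region.
tau = -np.inf  # empty region if no threshold achieves zero false negatives
for t in np.sort(np.unique(p_unsafe))[::-1]:
    if not np.any((p_unsafe <= t) & (y_cal == 1)):
        tau = t
        break

safe = p_unsafe <= tau
print(f"tau = {tau:.3f}, safety-region coverage = {safe.mean():.1%}")
print(f"false negatives inside region: {int(((y_cal == 1) & safe).sum())}")
```

On unseen data such a region only avoids false negatives statistically, and, as the abstract notes, how much coverage survives the zero-false-negative constraint depends mainly on the noise level of the dataset.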
Copyright information
© 2021 IFIP International Federation for Information Processing
About this paper
Cite this paper
Narteni, S., Ferretti, M., Orani, V., Vaccari, I., Cambiaso, E., Mongelli, M. (2021). From Explainable to Reliable Artificial Intelligence. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2021. Lecture Notes in Computer Science, vol. 12844. Springer, Cham. https://doi.org/10.1007/978-3-030-84060-0_17
DOI: https://doi.org/10.1007/978-3-030-84060-0_17
Print ISBN: 978-3-030-84059-4
Online ISBN: 978-3-030-84060-0