Abstract
Thoracic problems are medical conditions affecting the area behind the sternum, including the heart, lungs, trachea, bronchi, esophagus, and other structures of the respiratory and cardiovascular systems. They can be caused by a variety of conditions, such as respiratory infections, lung diseases, heart conditions, autoimmune diseases, or anxiety disorders, and vary in symptoms and severity. In this paper, we introduce a supervised neural network model trained to predict these problems and to furthermore increase its accuracy by using explainability methods. We chose the attention mechanism so that more informative features receive higher weights after training on the dataset. The accuracy of the trained model exceeded 80%. To analyze and explain the contribution of each feature, we use Local Interpretable Model-Agnostic Explanations (LIME), a post-hoc model-agnostic technique. Our experiments showed that by using the explainability results as a feedback signal, we were able to increase the accuracy of the base model by more than 20% on a small medical dataset.
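The abstract describes attention as a way to give more informative clinical features higher weights. A minimal sketch of such feature-level attention is shown below; the function names and feature values are hypothetical illustrations, not the paper's actual architecture, which combines this weighting with a trained classifier and LIME-based feedback.

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weighted_features(features, scores):
    """Weight each clinical feature by its attention score.

    features: raw feature values for one patient record
    scores:   learned relevance scores, one per feature
    Returns the re-weighted features and the attention weights.
    """
    weights = softmax(scores)
    return [w * f for w, f in zip(features, weights)], weights

# Toy example with three hypothetical features: the first feature has a
# higher relevance score, so it receives the largest attention weight.
values = [0.9, 0.1, 0.4]
scores = [2.0, 0.5, 0.5]
weighted, weights = attention_weighted_features(values, scores)
```

In a full pipeline, the attention scores would be learned jointly with the classifier, and a post-hoc explainer such as LIME would then be run on the trained model to identify which weighted features drove each prediction.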
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Costi, F., Onchis, D.M., Istin, C., Cozma, G.V. (2023). Explainability-Enhanced Neural Network for Thoracic Diagnosis Improvement. In: Tsapatsoulis, N., et al. Computer Analysis of Images and Patterns. CAIP 2023. Lecture Notes in Computer Science, vol 14184. Springer, Cham. https://doi.org/10.1007/978-3-031-44237-7_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44236-0
Online ISBN: 978-3-031-44237-7