Abstract
Deep learning models are subject to failure when performing inference on out-of-distribution (OOD) data, i.e., data that differs from the models' training data. In medical imaging settings, OOD data can be subtle and non-obvious to the human observer. Thus, highly sensitive algorithms that automatically detect OOD medical image data are critical. Previous works have demonstrated the utility of the distance between embedded training and test features as an OOD measure. These methods, however, do not account for variations in feature importance to the prediction task, treating all features equally. In this work, we propose a method that enhances distance-based OOD measures via feature importance weighting, where the weights are determined through an information bottleneck optimization process. We demonstrate the utility of the weighted OOD measure on the metastatic liver tumor segmentation task and compare its performance to its non-weighted counterpart in two assessments. The weighted OOD measure enhanced the detection of artificially perturbed data, with greater benefit observed for smaller perturbations (e.g., AUC = 0.8 vs. AUC = 0.72). In addition, the weighted OOD measure correlated more strongly with the liver tumor segmentation Dice coefficient (e.g., ρ = −0.76 vs. ρ = −0.21). In summary, this work demonstrates the benefit of feature importance weighting for distance-based OOD detection.
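To make the idea concrete, the following is a minimal sketch of an importance-weighted, distance-based OOD score, written under stated assumptions: the distance to the training-feature centroid stands in for whatever embedded-feature distance the paper uses, and the weight vector w stands in for the feature importances obtained from the information bottleneck optimization (here simply taken as given). It illustrates the general technique, not the authors' implementation.

```python
import numpy as np

def weighted_ood_score(train_feats, test_feat, w):
    """Importance-weighted distance of a test feature to the training centroid.

    train_feats : (N, D) array of embedded training features
    test_feat   : (D,)  embedded feature vector of one test case
    w           : (D,)  nonnegative feature-importance weights; in the paper
                  these come from an information bottleneck optimization,
                  but here they are assumed to be given.
    """
    mu = train_feats.mean(axis=0)                 # centroid of training features
    diff = test_feat - mu                         # deviation from training data
    return float(np.sqrt(np.sum(w * diff ** 2)))  # weighted distance = OOD score

# Toy usage: a shift along a high-importance feature dominates the score.
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 4))            # hypothetical embedded train features
test = np.array([0.1, 0.1, 0.1, 3.0])        # shifted only in the last feature
w_uniform = np.full(4, 0.25)                 # unweighted baseline
w_learned = np.array([0.05, 0.05, 0.05, 0.85])  # stand-in "learned" importances
print(weighted_ood_score(train, test, w_uniform))  # modest score
print(weighted_ood_score(train, test, w_learned))  # clearly larger score
```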
Notes
1. Additional figures supporting these results are included as supplementary material.
Acknowledgments
This research was supported by the University of Wisconsin Carbone Cancer Center.
Ethics declarations
Author Robert Jeraj, PhD, is the Chief Scientific Officer and a co-founder of AIQ Solutions, a quantitative medical image analysis software company.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Schott, B. et al. (2025). Information Bottleneck-Based Feature Weighting for Enhanced Medical Image Out-of-Distribution Detection. In: Sudre, C.H., Mehta, R., Ouyang, C., Qin, C., Rakic, M., Wells, W.M. (eds) Uncertainty for Safe Utilization of Machine Learning in Medical Imaging. UNSURE 2024. Lecture Notes in Computer Science, vol 15167. Springer, Cham. https://doi.org/10.1007/978-3-031-73158-7_12
Print ISBN: 978-3-031-73157-0
Online ISBN: 978-3-031-73158-7