Confidence Matters: Enhancing Medical Image Classification Through Uncertainty-Driven Contrastive Self-distillation

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 (MICCAI 2024)

Abstract

The scarcity of data in deep-learning-based medical image classification often leads to overfitting on the training data. Research indicates that self-distillation techniques, particularly those employing mean teacher ensembling, can alleviate this issue. However, directly transferring knowledge distillation (KD) from computer vision to medical image classification yields subpar results, because medical images exhibit higher intra-class variance and class imbalance. These properties can bias supervised and contrastive learning-based solutions towards the majority class, resulting in misclassification. To address this, we propose UDCD, an uncertainty-driven contrastive self-distillation framework that regulates the transfer of contrastive and supervised knowledge, ensuring that only relevant knowledge passes from the teacher to the student for fine-grained knowledge transfer. By gating the transferable contrastive and supervised knowledge on the teacher's confidence, our framework distils only useful knowledge to the student and classifies images more reliably under the intra- and inter-class relation constraints and class imbalance that arise from data scarcity. Extensive experiments on benchmark datasets such as HAM10000 and APTOS validate the superiority of the proposed method. The code is available at https://github.com/philsaurabh/UDCD_MICCAI.
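The core idea of gating knowledge transfer on teacher confidence can be illustrated with a minimal sketch. Note that this is not the authors' actual UDCD formulation: the function names, the use of the teacher's maximum softmax probability as the confidence score, and the thresholding rule are all assumptions made for illustration of a confidence-gated distillation loss.

```python
import numpy as np

def softmax(logits, tau=1.0):
    """Temperature-scaled softmax, row-wise."""
    z = logits / tau
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def confidence_gated_kd_loss(student_logits, teacher_logits, labels,
                             tau=2.0, threshold=0.7):
    """KL-based distillation loss in which each sample's contribution is
    gated by the teacher's confidence (hypothetical gating rule: the
    teacher's max softmax probability).

    Samples where the teacher is both correct and confident transfer
    knowledge at full weight; all others are down-weighted by the
    teacher's confidence, so an uncertain teacher distils less.
    """
    p_t = softmax(teacher_logits, tau)
    p_s = softmax(student_logits, tau)
    conf = p_t.max(axis=1)                  # teacher confidence per sample
    correct = p_t.argmax(axis=1) == labels  # teacher agrees with the label
    gate = np.where(correct & (conf >= threshold), 1.0, conf)
    # KL(teacher || student), scaled by tau^2 as in standard KD
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)
    return float(np.mean(gate * kl * tau ** 2))
```

Under this sketch, a sample whose teacher prediction is uncertain (low max probability) contributes proportionally less to the distillation loss, which is the spirit of transferring "only useful knowledge" described above.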



Acknowledgments

The authors gratefully acknowledge the support of the Prime Minister's Research Fellowship (PMRF) scheme of the Government of India, which facilitated this research.

Author information

Corresponding author

Correspondence to Saurabh Sharma.

Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare relevant to this article’s content.

Electronic Supplementary Material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1188 KB)

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Sharma, S., Kumar, A., Chandra, J. (2024). Confidence Matters: Enhancing Medical Image Classification Through Uncertainty-Driven Contrastive Self-distillation. In: Linguraru, M.G., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. MICCAI 2024. Lecture Notes in Computer Science, vol 15010. Springer, Cham. https://doi.org/10.1007/978-3-031-72117-5_13

  • DOI: https://doi.org/10.1007/978-3-031-72117-5_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72116-8

  • Online ISBN: 978-3-031-72117-5

  • eBook Packages: Computer Science, Computer Science (R0)
