Multi-Target Domain Adaptation with Prompt Learning for Medical Image Segmentation

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (MICCAI 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14220)

Abstract

Domain shift is a major challenge when deploying deep learning models in real-world applications, owing to differences in data distributions. Recent advances in domain adaptation mainly come from explicitly learning domain-invariant features (e.g., by adversarial learning, metric learning, and self-training), yet such approaches cannot be easily extended to multiple target domains because of the diversity of domain knowledge. In this paper, we present a novel multi-target domain adaptation (MTDA) algorithm, i.e., prompt-DA, which performs implicit feature adaptation for medical image segmentation. In particular, we build a feature transfer module that obtains domain-specific prompts and uses them to generate domain-aware image features through a specially designed, simple feature fusion module. Moreover, the proposed prompt-DA is compatible with previous DA methods (e.g., adversarial-learning-based ones), so their performance can be further improved. The proposed method is evaluated on two challenging domain-shift datasets: iSeg-2019 (domain shift in infant MRI at different ages) and BraTS2018 (domain shift between high-grade and low-grade gliomas). Experimental results show that our method achieves state-of-the-art performance in both cases and demonstrate the effectiveness of the proposed prompt-DA. Experiments combining prompt-DA with adversarial-learning DA further show that it works well alongside other DA methods. Our code is available at https://github.com/MurasakiLin/prompt-DA.
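
To make the idea above concrete, below is a minimal sketch of how domain-specific prompts could be fused with shared encoder features to produce domain-aware features. The module name, tensor shapes, and the concatenate-and-project fusion design are illustrative assumptions rather than the authors' implementation; see the linked repository for the official code.

```python
# Minimal sketch of the prompt-DA idea from the abstract, assuming a standard
# encoder-decoder segmentation backbone. All names and shapes are hypothetical.
import torch
import torch.nn as nn


class PromptFeatureFusion(nn.Module):
    """Fuses a learned domain-specific prompt into encoder features to produce
    domain-aware features (assumed fusion: broadcast, concatenate, project)."""

    def __init__(self, num_domains: int, feat_channels: int, prompt_dim: int = 64):
        super().__init__()
        # One learnable prompt vector per target domain.
        self.prompts = nn.Embedding(num_domains, prompt_dim)
        # Simple fusion: concatenate the broadcast prompt with the features
        # and project back to the original channel width with a 1x1 conv.
        self.fuse = nn.Conv2d(feat_channels + prompt_dim, feat_channels, kernel_size=1)

    def forward(self, feats: torch.Tensor, domain_id: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W); domain_id: (B,) integer domain labels
        b, _, h, w = feats.shape
        prompt = self.prompts(domain_id)                        # (B, prompt_dim)
        prompt = prompt[:, :, None, None].expand(b, -1, h, w)   # broadcast spatially
        return self.fuse(torch.cat([feats, prompt], dim=1))     # domain-aware features


# Usage sketch: insert the fusion module between a shared encoder and decoder.
if __name__ == "__main__":
    encoder_feats = torch.randn(2, 256, 32, 32)   # placeholder encoder output
    domain_ids = torch.tensor([0, 1])             # e.g., two target domains
    fusion = PromptFeatureFusion(num_domains=2, feat_channels=256)
    domain_aware = fusion(encoder_feats, domain_ids)
    print(domain_aware.shape)                     # torch.Size([2, 256, 32, 32])
```

Because the adaptation happens inside the feature path rather than through an extra loss, a module of this kind can sit alongside adversarial or self-training objectives, which is consistent with the paper's claim that prompt-DA complements existing DA methods.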

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 62001222) and by China Postdoctoral Science Foundation funded projects (Nos. 2021TQ0150 and 2021M701699).

Author information

Corresponding author

Correspondence to Xuyun Wen.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 437 KB)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lin, Y., Nie, D., Liu, Y., Yang, M., Zhang, D., Wen, X. (2023). Multi-Target Domain Adaptation with Prompt Learning for Medical Image Segmentation. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14220. Springer, Cham. https://doi.org/10.1007/978-3-031-43907-0_68

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-43907-0_68

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43906-3

  • Online ISBN: 978-3-031-43907-0

  • eBook Packages: Computer Science, Computer Science (R0)
