Weakly Supervised Confidence Learning for Brain MR Image Dense Parcellation

  • Conference paper
  • In: Machine Learning in Medical Imaging (MLMI 2019)

Abstract

Automatic dense parcellation of brain MR images, which labels hundreds of regions of interest (ROIs), plays an important role in neuroimage analysis. Deep-learning-based brain image parcellation in particular is widely recognized for its strong performance, but it remains limited in practical application by its demand for sufficient training data and by the intensive GPU memory it requires. Because manual segmentation is costly, it is usually not feasible to provide a large dataset for training the network. On the other hand, it is relatively easy to transfer labeling information to many new unlabeled images and thus augment the training data; such augmented data, however, can only be considered weakly labeled for training. We therefore propose a cascaded weakly supervised confidence integration network (CINet). The main contributions of our method are twofold. First, we propose an image registration-based data augmentation method and evaluate the confidence of the labeling information for each augmented image. The augmented data, together with the original small training dataset, jointly contribute to training the CINet for segmentation. Second, we propose a random crop strategy to handle the large number of feature channels in the network, which are needed to label hundreds of neural ROIs. The demand on GPU memory is thus relieved, while better accuracy is also achieved. In experiments, we use 37 manually labeled subjects and augment 96 images with weak labels for training. The overall Dice score over 112 brain regions reaches 75% on the testing set, higher than that obtained using the original training data only.
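To make the two contributions concrete, here are two minimal sketches (Python with NumPy/SciPy; not the authors' code, and all function and variable names are illustrative). The first warps a labeled atlas onto an unlabeled target using an already-estimated dense displacement field (the registration itself, e.g. by a diffeomorphic or learning-based tool, is assumed done beforehand), propagates labels with nearest-neighbour interpolation so they remain discrete, and uses a global normalized cross-correlation between the warped atlas image and the target as a crude stand-in for the paper's labeling-confidence estimate:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(volume, disp, order):
    """Warp a 3D volume by a dense displacement field disp of shape (3, D, H, W)."""
    grid = np.indices(volume.shape, dtype=np.float32)  # identity sampling grid
    return map_coordinates(volume, grid + disp, order=order, mode="nearest")

def propagate_weak_labels(atlas_img, atlas_lab, target_img, disp):
    """Transfer atlas labels to an unlabeled target and score their reliability."""
    # order=0 (nearest neighbour) keeps the propagated labels discrete.
    weak_lab = warp(atlas_lab.astype(np.float32), disp, order=0).astype(atlas_lab.dtype)
    warped_img = warp(atlas_img, disp, order=1)
    # Global normalized cross-correlation as a crude confidence proxy:
    # good alignment -> value near 1, poor alignment -> near 0.
    a = warped_img - warped_img.mean()
    b = target_img - target_img.mean()
    confidence = float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return weak_lab, confidence
```

The (weak label, confidence) pairs produced this way could then weight each augmented sample's contribution to the training loss, alongside the fully trusted manual labels.

For the second contribution, the abstract describes a random crop over the many feature channels needed to label hundreds of ROIs, without spelling out the exact scheme. One plausible, purely hypothetical reading is to supervise only a random subset of ROI label channels per training step, so the full several-hundred-channel output never has to be materialized in GPU memory at once:

```python
def crop_roi_channels(one_hot_labels, n_keep, rng=None):
    """Randomly keep n_keep of the C ROI channels in a (C, D, H, W) one-hot map.

    Hypothetical illustration only: returns the kept channel indices (so the
    matching network output channels can be selected) and the cropped label slab.
    """
    rng = rng or np.random.default_rng()
    kept = rng.choice(one_hot_labels.shape[0], size=n_keep, replace=False)
    return kept, one_hot_labels[kept]
```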



Acknowledgement

This research was supported by a grant from the National Key Research and Development Program of China (No. 2018YFC0116400).

Author information

Corresponding author

Correspondence to Feng Shi.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Xiao, B. et al. (2019). Weakly Supervised Confidence Learning for Brain MR Image Dense Parcellation. In: Suk, HI., Liu, M., Yan, P., Lian, C. (eds) Machine Learning in Medical Imaging. MLMI 2019. Lecture Notes in Computer Science, vol 11861. Springer, Cham. https://doi.org/10.1007/978-3-030-32692-0_47


  • DOI: https://doi.org/10.1007/978-3-030-32692-0_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32691-3

  • Online ISBN: 978-3-030-32692-0

  • eBook Packages: Computer Science, Computer Science (R0)
