
Active Learning for Efficient Segmentation of Liver with Convolutional Neural Network–Corrected Labeling in Magnetic Resonance Imaging–Derived Proton Density Fat Fraction

Published in: Journal of Digital Imaging

Abstract

This study proposes an efficient method for self-automated segmentation of the liver on magnetic resonance imaging–derived proton density fat fraction (MRI-PDFF) through deep active learning. We developed an active learning framework for liver segmentation that uses both labeled and unlabeled MRI-PDFF data. A total of 77 liver MRI-PDFF scans were obtained from patients with nonalcoholic fatty liver disease. For training, tuning, and testing of the liver segmentation, the ground truth for 71 internal (training) and 6 external (testing) MRI-PDFF scans was verified by an expert reviewer. For 100 randomly selected slices, manual and deep learning (DL) segmentations were visually graded on a scale from very accurate to mostly accurate. The Dice similarity coefficients for the three active learning steps were 0.69 ± 0.21, 0.85 ± 0.12, and 0.94 ± 0.01, respectively (paired t-test: p = 0.1389 between the first and second steps, p = 0.0144 between the first and third steps), indicating that active learning provides superior performance compared with non-active learning. The biases in the Bland-Altman plots for the three steps were − 24.22% (from − 82.76 to − 2.70), − 21.29% (from − 59.52 to 3.06), and − 0.67% (from − 10.43 to 4.06). In addition, the required annotation time fell fivefold once active learning was applied (2 min with, versus 13 min without, active learning in the first step). More slices were graded very accurate for DL segmentation (46 slices) than for manual segmentation (6 slices). Deep active learning therefore enables efficient learning for liver segmentation with limited MRI-PDFF data.
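To make the evaluation metric above concrete: the Dice similarity coefficient (DSC) measures overlap between a predicted mask and the ground truth, ranging from 0 (no overlap) to 1 (perfect agreement). A minimal NumPy sketch follows; the function name and toy masks are illustrative, not the study's code:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 slice: prediction overlaps truth in 3 of 4 labeled pixels
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 2*3 / (4+3) ≈ 0.857
```

In practice the DSC is computed per scan (or per slice) and averaged, which is how the step-wise values reported in the abstract are usually obtained.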


Figs. 1 to 6 (see the full article)


Availability of Data and Material and Code Availability

Although public use of our datasets (KUAH and KUANH) is restricted by the Personal Information Protection Act in South Korea, we expect to be able to share our datasets and source code upon request.

Abbreviations

BLS: Boundary loss

BCE: Binary cross-entropy

CNN: Convolutional neural network

CRFs: Conditional random fields

CT: Computed tomography

DL: Deep learning

DLS: Dice loss

DSC: Dice similarity coefficient

FCN: Fully convolutional network

MRI-PDFF: Magnetic resonance imaging–derived proton density fat fraction

RPN: Region proposal network

3D: Three-dimensional
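The loss abbreviations above (BCE, DLS, BLS) name terms commonly combined when training segmentation CNNs. As a hedged illustration of how such a compound objective works, here is a NumPy sketch of a weighted BCE-plus-Dice loss; the equal 0.5/0.5 weighting and the epsilon smoothing are illustrative assumptions, not necessarily the paper's formulation:

```python
import numpy as np

def bce_loss(prob, target, eps=1e-7):
    """Binary cross-entropy averaged over pixels; probabilities are
    clipped away from 0 and 1 to keep the logs finite."""
    prob = np.clip(prob, eps, 1 - eps)
    return -np.mean(target * np.log(prob) + (1 - target) * np.log(1 - prob))

def dice_loss(prob, target, eps=1e-7):
    """Soft Dice loss: 1 - Dice computed on raw probabilities, so the
    loss is differentiable with respect to the network output."""
    num = 2.0 * np.sum(prob * target) + eps
    den = np.sum(prob) + np.sum(target) + eps
    return 1.0 - num / den

def combined_loss(prob, target, w_bce=0.5, w_dice=0.5):
    """Weighted sum of BCE (per-pixel accuracy) and Dice (region overlap)."""
    return w_bce * bce_loss(prob, target) + w_dice * dice_loss(prob, target)
```

BCE penalizes every pixel equally, while the Dice term directly rewards overlap of the (often small) foreground region, so the combination is a common remedy for class imbalance in organ segmentation.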


Funding

The authors received funding for this study from the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (2020R1I1A1A01071600).

Author information


Contributions

Y.C. and M.J.K. wrote the main manuscript. Y.C. performed the experiments and prepared the figures. B.J.P., K.C.S., Y.S.K., Y.E.H., D.J.S., and N.Y.H. prepared and verified the datasets; M.J.K. also verified the datasets. All authors reviewed the manuscript, were involved in writing the paper, and approved the final submitted and published versions.

Corresponding author

Correspondence to Min Ju Kim.

Ethics declarations

Ethics Approval

Our institutional review board approved this retrospective case-control study, and the requirement for informed consent was waived. No experiments involving humans and/or human tissue samples were performed in this study. In addition, no organs/tissues were procured from prisoners in this study.

Conflict of Interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Key Points

• Our study enables efficient self-automated segmentation of the liver on magnetic resonance imaging–derived proton density fat fraction through deep active learning.

• Active learning improved liver segmentation where only limited clinical training datasets were available; the 3D nnU-Net with active learning outperformed the 2D nnU-Net with active learning.

• Our results demonstrate that a deep active learning framework (human-in-the-loop) can lower the cost and effort of annotation by training efficiently on limited 3D MRI-PDFF datasets.
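The human-in-the-loop workflow named in these key points can be sketched in a few lines: the current model pre-segments the most uncertain unlabeled scans, an expert corrects those proposals (far cheaper than labeling from scratch), and the corrected labels feed the next training round. Everything below (the ToyModel class, the entropy scores, the string stand-ins for masks) is a hypothetical illustration of this loop, not the authors' implementation:

```python
class ToyModel:
    """Stand-in for a segmentation CNN in an active learning loop."""
    def __init__(self):
        self.labeled = []                    # accumulated training labels
    def uncertainty(self, scan):
        return scan["entropy"]               # stand-in uncertainty score
    def predict(self, scan):
        return f"mask-for-{scan['id']}"      # CNN-proposed segmentation
    def train(self, corrected):
        self.labeled.extend(corrected)       # retrain on corrected labels

def active_learning_round(model, pool, k=2):
    """One round: pick the k most uncertain scans, have an expert correct
    the model's proposed masks, retrain, and shrink the unlabeled pool."""
    batch = sorted(pool, key=model.uncertainty, reverse=True)[:k]
    corrected = [(s["id"], "expert-corrected:" + model.predict(s))
                 for s in batch]
    model.train(corrected)
    return [s for s in pool if s not in batch]

pool = [{"id": i, "entropy": e} for i, e in enumerate([0.1, 0.9, 0.5, 0.8])]
model = ToyModel()
pool = active_learning_round(model, pool, k=2)
print(len(pool), len(model.labeled))  # 2 scans remain; 2 corrected labels added
```

The efficiency gain reported in the abstract comes from the correction step: editing a CNN-proposed mask takes minutes, whereas drawing a liver contour from scratch takes far longer.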


About this article

Cite this article

Cho, Y., Kim, M.J., Park, B.J. et al. Active Learning for Efficient Segmentation of Liver with Convolutional Neural Network–Corrected Labeling in Magnetic Resonance Imaging–Derived Proton Density Fat Fraction. J Digit Imaging 34, 1225–1236 (2021). https://doi.org/10.1007/s10278-021-00516-4

