Esophagus Segmentation in CT Images via Spatial Attention Network and STAPLE Algorithm
Figure 1. The proposed spatial attention module.
Figure 2. Details of our proposed model.
Figure 3. The overall architecture of our method.
Figure 4. 3D visualization of esophagus segmentation results for patient 49 in the test set of the SegTHOR dataset. The number indicates the Dice score, as provided by the challenge organizer. (a) Result from the fold-1-trained weights; (b) result from the fold-2-trained weights; (c) result from the fold-3-trained weights; (d) result from the fold-4-trained weights; (e) result from our method.
Figure 5. Example of OARs from the StructSeg 2019 dataset. (a) 2D image; (b) 3D image. Each OAR is shown in a different color: green, left lung; red, right lung; pink, spinal cord; turquoise, trachea; blue, heart; yellow, esophagus.
Figure 6. Example of OARs from the SegTHOR ISBI 2019 dataset. (a) 2D image; (b) 3D image. Each OAR is shown in a different color: green, heart; yellow, esophagus; blue, trachea; red, aorta.
Figure 7. Average Dice score for esophagus segmentation of our method compared with using each fold's weights separately, evaluated on the test set of the SegTHOR ISBI 2019 challenge.
Figure 8. Visualization of esophagus segmentation results from the validation set. The yellow area indicates the segmented region. (a) Small 2D patch; (b) result of U-Net-cbam-resnet34; (c) result of U-Net-cbam-seresnext50; (d) result of U-Net-scse-resnet34; (e) result of U-Net-scse-seresnext50; (f) result of U-Net-no_att-resnet34; (g) result of U-Net-no_att-se_resnext50; (h) result of our method; (i) ground truth.
Abstract
1. Introduction
- We propose an automated framework for esophagus segmentation with high accuracy. The framework can also be applied to other organs. The ablation study shows that it achieves competitive results compared with state-of-the-art methods.
- The proposed model exploits spatial information from the attention module (a minimal illustrative sketch of such an attention block is given after this list). With the larger receptive field provided by the atrous spatial pyramid pooling (ASPP) module, esophagus features are captured more effectively. We also employ group normalization (GN) to obtain high performance and stable results.
- We render the segmentation results as both two-dimensional (2D) and three-dimensional (3D) images, which assists doctors and specialists better than either representation alone.
- Experimental results on two public datasets, SegTHOR and StructSeg, demonstrate that our esophagus segmentation outperforms state-of-the-art methods.
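To make the attention mechanism above concrete, the following is a minimal sketch of a CBAM-style spatial attention block, the same family of module as the U-Net-cbam baselines compared in Section 4.4. It is an illustrative assumption written in PyTorch, not the exact module defined in Section 3.1, and the class and variable names are ours.

```python
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: pool the feature map along the channel
    axis, learn a 2D attention map, and reweight every spatial location."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = torch.mean(x, dim=1, keepdim=True)      # (B, 1, H, W)
        max_pool, _ = torch.max(x, dim=1, keepdim=True)    # (B, 1, H, W)
        attn = self.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                                    # spatially reweighted features


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)      # hypothetical decoder feature map
    out = SpatialAttention()(feat)
    print(out.shape)                       # torch.Size([2, 64, 32, 32])
```

In the proposed model, the attention-weighted features are further combined with the enlarged receptive field of the ASPP module and normalized with GN, as described in Section 3.2.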
2. Related Works
2.1. Thoracic Organs at Risk Segmentation
2.2. Esophagus Segmentation
3. Materials and Methods
3.1. Spatial Attention Module
3.2. The Proposed Method
3.3. Post Processing Step with STAPLE Algorithm
4. Experimental Results
4.1. Dataset
4.1.1. StructSeg 2019 Dataset
4.1.2. SegTHOR Dataset
4.2. Evaluation Metrics
4.3. Training Model
4.3.1. With StructSeg Dataset
4.3.2. With SegTHOR Dataset
4.4. Performance
4.4.1. With SegTHOR Dataset
4.4.2. With StructSeg Dataset
4.4.3. Discussions
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Ezzell, G.A.; Galvin, J.M.; Low, D.; Palta, J.R.; Rosen, I.; Sharpe, M.B.; Xia, P.; Xiao, Y.; Xing, L.; Yu, C.X. Guidance document on delivery, treatment planning, and clinical implementation of IMRT: Report of the IMRT subcommittee of the AAPM radiation therapy committee. Med. Phys. 2003, 30, 2089–2115.
2. Mackie, T.R.; Kapatoes, J.; Ruchala, K.; Lu, W.; Wu, C.; Olivera, G.; Forest, L.; Tome, W.; Welsh, J.; Jeraj, R. Image guidance for precise conformal radiotherapy. Int. J. Radiat. Oncol. Biol. Phys. 2003, 56, 89–105.
3. Fechter, T.; Adebahr, S.; Baltas, D.; Ben Ayed, I.; Desrosiers, C.; Dolz, J. Esophagus segmentation in CT via 3D fully convolutional neural network and random walk. Med. Phys. 2017, 44, 6341–6352.
4. Trullo, R.; Petitjean, C.; Nie, D.; Shen, D.; Ruan, S. Fully automated esophagus segmentation with a hierarchical deep learning approach. In Proceedings of the IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuching, Malaysia, 12–14 September 2017; pp. 503–506.
5. Chen, S.; Yang, H.; Fu, J.; Mei, W.; Ren, S.; Liu, Y.; Zhu, Z.; Liu, L.; Li, H.; Chen, H. U-Net Plus: Deep Semantic Segmentation for Esophagus and Esophageal Cancer in Computed Tomography Images. IEEE Access 2019, 7, 82867–82877.
6. Huang, G.; Zhu, J.; Li, J.; Wang, Z.; Cheng, L.; Liu, L.; Li, H.; Zhou, J. Channel-attention U-Net: Channel attention mechanism for semantic segmentation of esophagus and esophageal cancer. IEEE Access 2020, 8, 122798–122810.
7. Diniz, J.O.B.; Ferreira, J.L.; Diniz, P.H.B.; Silva, A.C.; Paiva, A.C. Esophagus segmentation from planning CT images using an atlas-based deep learning approach. Comput. Methods Programs Biomed. 2020, 197.
8. Lou, X.; Zhu, Y.; Punithakumar, K.; Le, L.H.; Li, B. Esophagus segmentation in computed tomography images using a U-Net neural network with a semiautomatic labeling method. IEEE Access 2020, 8, 202459–202468.
9. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
10. Wu, Y.; He, K. Group Normalization. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
11. Zhou, X.; Yang, G. Normalization in Training U-Net for 2-D Biomedical Semantic Segmentation. IEEE Robot. Autom. Lett. 2019, 4, 1792–1799.
12. Gadosey, P.K.; Li, Y.; Agyekum, E.A.; Zhang, T.; Liu, Z.; Yamak, P.T.; Essaf, F. SD-UNet: Stripping down U-Net for Segmentation of Biomedical Images on Platforms with Low Computational Budgets. Diagnostics 2020, 10, 110.
13. Yang, J.; Zhu, J.; Wang, H.; Yang, X. Dilated MultiResUNet: Dilated Multiresidual Blocks Network Based on U-Net for Biomedical Image Segmentation. Biomed. Signal Process. Control 2021, 68, 102643.
14. Iglesias, J.E.; Sabuncu, M.R. Multi-atlas segmentation of biomedical images: A survey. Med. Image Anal. 2015, 24, 205–219.
15. Isgum, I.; Staring, M.; Rutten, A.; Prokop, M.; Viergever, M.A.; Ginneken, B.V. Multi-atlas-based segmentation with local decision fusion: Application to cardiac and aortic segmentation in CT scans. IEEE Trans. Med. Imaging 2009, 28, 1000–1010.
16. Aljabar, P.; Heckemann, R.A.; Hammers, A.; Hajnal, J.V.; Rueckert, D. Multi-atlas based segmentation of brain images: Atlas selection and its effect on accuracy. NeuroImage 2009, 46, 726–738.
17. Okada, T.; Linguraru, M.G.; Hori, M.; Summers, R.M.; Tomiyama, N.; Sato, Y. Abdominal multi-organ segmentation from CT images using conditional shape-location and unsupervised intensity priors. Med. Image Anal. 2015, 26, 1–18.
18. Wolz, R.; Chu, C.; Misawa, K.; Fujiwara, M.; Mori, K.; Rueckert, D. Automated abdominal multi-organ segmentation with subject-specific atlas generation. IEEE Trans. Med. Imaging 2013, 32, 1723–1730.
19. Wang, L.; Shi, F.; Lin, W.; Gilmore, J.H.; Shen, D. Automatic segmentation of neonatal images using convex optimization and coupled level sets. NeuroImage 2011, 58, 805–817.
20. Shi, F.; Fan, Y.; Tang, S.; Gilmore, J.H.; Lin, W.; Shen, D. Neonatal brain image segmentation in longitudinal MRI studies. NeuroImage 2010, 49, 391–400.
21. Cardoso, M.J.; Melbourne, A.; Kendall, G.S.; Modat, M.; Robertson, N.J.; Marlow, N.; Ourselin, S. Adapt: An adaptive preterm segmentation algorithm for neonatal brain MRI. NeuroImage 2013, 65, 97–108.
22. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
23. Gibson, E.; Giganti, F.; Hu, Y.; Bonmati, E.; Bandula, S.; Gurusamy, K.; Davidson, B.R.; Pereira, S.P.; Clarkson, M.; Barratt, D.C. Towards Image-Guided Pancreas and Biliary Endoscopy: Automatic Multi-organ Segmentation on Abdominal CT with Dense Dilated Networks. In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2017); Springer: Berlin/Heidelberg, Germany, 2017; Volume 10433, pp. 728–736.
24. Hu, P.; Wu, F.; Peng, J.; Bao, Y.; Chen, F.; Kong, D. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 399–411.
25. Zhou, X.; Ito, T.; Takayama, R.; Wang, S.; Hara, T.; Fujita, H. Three-dimensional CT image segmentation by combining 2D fully convolutional network with 3D majority voting. In Deep Learning and Data Labeling for Medical Applications; Springer: Berlin/Heidelberg, Germany, 2016; pp. 111–120.
26. Roth, H.R.; Oda, H.; Hayashi, Y.; Oda, M.; Shimizu, N.; Fujiwara, M.; Misawa, K.; Mori, K. Hierarchical 3D fully convolutional networks for multi-organ segmentation. arXiv 2017, arXiv:1704.06382. Available online: https://arxiv.org/abs/1704.06382 (accessed on 28 May 2021).
27. Zhou, X.; Takayama, R.; Wang, S.; Hara, T.; Fujita, H. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method. Med. Phys. 2017, 44, 5221–5233.
28. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; pp. 234–241.
29. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 818–833.
30. Zeng, G.; Yang, X.; Li, J.; Yu, L.; Heng, P.A.; Zheng, G. 3D U-Net with multi-level deep supervision: Fully automatic segmentation of proximal femur in 3D MR images. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Quebec City, QC, Canada, 10 September 2017; pp. 274–282.
31. Gordienko, Y.; Gang, P.; Hui, J.; Zeng, W.; Kochura, Y.; Alienin, O.; Rokovyi, O.; Stirenko, S. Deep learning with lung segmentation and bone shadow exclusion techniques for chest X-ray analysis of lung cancer. In Proceedings of the International Conference on Computer Science, Engineering and Education Applications, Kiev, Ukraine, 18–20 January 2018; pp. 638–647.
32. Kleesiek, J.; Urban, G.; Hubert, A.; Schwarz, D.; Maier-Hein, K.; Bendszus, M.; Biller, A. Deep MRI brain extraction: A 3D convolutional neural network for skull stripping. NeuroImage 2016, 129, 460–469.
33. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
34. Milletari, F.; Navab, N.; Ahmadi, S.A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
35. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
36. Pelt, D.M.; Sethian, J.A. A mixed-scale dense convolutional neural network for image analysis. Proc. Natl. Acad. Sci. USA 2018, 115, 254–259.
37. Rundo, L.; Han, C.; Nagano, Y.; Zhang, J.; Hataya, R.; Militello, C.; Tangherloni, A.; Nobile, M.S.; Ferretti, C.; Besozzi, D.; et al. USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets. Neurocomputing 2019, 365, 31–43.
38. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
39. Rundo, L.; Han, C.; Zhang, J.; Hataya, R.; Nagano, Y.; Militello, C.; Ferretti, C.; Nobile, M.S.; Tangherloni, A.; Gilardi, M.; et al. CNN-based Prostate Zonal Segmentation on T2-weighted MR Images: A Cross-dataset Study. In Neural Approaches to Dynamics of Signal Exchanges; Springer: Berlin/Heidelberg, Germany, 2020.
40. Lachinov, D. Segmentation of thoracic organs using pixel shuffle. In Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI), Venice, Italy, 8–11 April 2019.
41. Zhang, L.; Wang, L.; Huang, Y.; Chen, H. Segmentation of thoracic organs at risk in CT images combining coarse and fine network. In Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI), Venice, Italy, 8–11 April 2019.
42. Chen, P.; Xu, C.; Li, X.; Ma, Y.; Sun, F. Two-stage network for OAR segmentation. In Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI), Venice, Italy, 8–11 April 2019.
43. Vesal, S.; Ravikumar, N.; Maier, A. A 2D dilated residual U-Net for multi-organ segmentation in thoracic CT. arXiv 2019, arXiv:1905.07710. Available online: https://arxiv.org/abs/1905.07710 (accessed on 28 May 2021).
44. Wang, Q.; Zhao, W.; Zhang, C.H.; Zhang, L.; Wang, C.; Li, Z.; Li, G. 3D enhanced multi-scale network for thoracic organs segmentation. In Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI), Venice, Italy, 8–11 April 2019.
45. He, T.; Hu, J.; Song, Y.; Guo, J.; Yi, Z. Multi-task learning for the segmentation of organs at risk with label dependence. Med. Image Anal. 2020, 61, 101666.
46. Han, M.; Yao, G.; Zhang, W.; Mu, G.; Zhan, Y.; Zhou, X.; Gao, Y. Segmentation of CT thoracic organs by multi-resolution VB-nets. In Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI), Venice, Italy, 8–11 April 2019.
47. Tappeiner, E.; Pröll, S.; Hönig, M.; Raudaschl, P.F.; Zaffino, P.; Spadea, M.F.; Gregory, C.S.; Rainer, S.; Fritscher, K. Multi-organ segmentation of the head and neck area: An efficient hierarchical neural networks approach. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 745–754.
48. Militello, C.; Rundo, L.; Toia, P.; Conti, V.; Russo, G.; Filorizzo, C.; Maffei, E.; Cademartiri, F.; Grutta, L.; Midiri, M.; et al. A semi-automatic approach for epicardial adipose tissue segmentation and quantification on cardiac CT scans. Comput. Biol. Med. 2019, 114, 103424.
49. Bai, J.W.; Li, P.A.; Wang, K.H. Automatic whole heart segmentation based on watershed and active contour model in CT images. In Proceedings of the IEEE International Conference on Computer Science and Network Technology (ICCSNT), Changchun, China, 10–11 December 2016; pp. 741–744.
50. Feulner, J.; Zhou, S.K.; Hammon, M.; Seifert, S.; Huber, M.; Comaniciu, D.; Hornegger, J.; Cavallaro, A. A probabilistic model for automatic segmentation of the esophagus in 3-D CT scans. IEEE Trans. Med. Imaging 2011, 30, 1252–1264.
51. Grosgeorge, D.; Petitjean, C.; Dubray, B.; Ruan, S. Esophagus segmentation from 3D CT data using skeleton prior-based graph cut. Comput. Math. Methods Med. 2013.
52. Feng, X.; Qing, K.; Tustison, N.J.; Meyer, C.H.; Chen, Q. Deep convolutional neural network for segmentation of thoracic organs-at-risk using cropped 3D images. Med. Phys. 2019, 46, 2169–2180.
53. Warfield, S.K.; Zou, K.H.; Wells, W.M. Simultaneous truth and performance level estimation (STAPLE): An algorithm for the validation of image segmentation. IEEE Trans. Med. Imaging 2004, 23, 903–921.
54. Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
55. Rensink, R.A. The dynamic representation of scenes. Vis. Cogn. 2000, 7, 17–42.
56. Corbetta, M.; Shulman, G.L. Control of goal-directed and stimulus-driven attention in the brain. Nat. Rev. Neurosci. 2002, 3, 201–215.
57. Komodakis, N.; Zagoruyko, S. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017.
58. Shao, Z.; Yang, K.; Zhou, W. Performance evaluation of single-label and multi-label remote sensing image retrieval using a dense labeling dataset. Remote Sens. 2018, 10, 964.
59. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
60. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
61. Zaffino, P.; Pernelle, G.; Mastmeyer, A.; Mehrtash, A.; Zhang, H.; Kikinis, R.; Kapur, T.; Spadea, M.F. Fully automatic catheter segmentation in MRI with 3D convolutional neural networks: Application to MRI-guided gynecologic brachytherapy. Phys. Med. Biol. 2019, 64, 165008.
62. Hatt, M.; Laurent, B.; Ouahabi, A.; Fayad, H.; Tan, S.; Li, L.; Lu, W.; Jaouen, V.; Tauber, C.; Czakon, J.; et al. The first MICCAI challenge on PET tumor segmentation. Med. Image Anal. 2018, 44, 177–195.
63. Dewalle-Vignion, A.S.; Betrouni, N.; Baillet, C.; Vermandel, M. Is STAPLE algorithm confident to assess segmentation methods in PET imaging? Phys. Med. Biol. 2015, 60, 9473.
64. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. Available online: https://arxiv.org/abs/1412.6980 (accessed on 28 May 2021).
65. Roy, A.G.; Navab, N.; Wachinger, C. Recalibrating fully convolutional networks with spatial and channel “squeeze and excitation” blocks. IEEE Trans. Med. Imaging 2018, 38, 540–549.
66. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
67. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167. Available online: https://arxiv.org/abs/1502.03167 (accessed on 28 May 2021).
Comparison of esophagus segmentation results on the test set of the SegTHOR dataset (Dice score and Hausdorff distance, HD).

| Method | Dice | HD |
| --- | --- | --- |
| Lachinov et al. (2019) [40] | 0.8303 | - |
| Zhang et al. (2019) [41] | 0.7732 | 1.6774 |
| Chen et al. (2019) [42] | 0.8166 | 0.4914 |
| Vesal et al. (2019) [43] | 0.8580 | 0.3310 |
| Wang et al. (2019) [44] | 0.8597 | 0.2883 |
| He et al. (2020) [45] | 0.8594 | 0.2743 |
| Han et al. (2019) [46] | 0.8651 | 0.2590 |
| U-Net-scse-seresnext50 | 0.8479 | 0.3414 |
| U-Net-no_att-resnet34 | 0.8381 | 0.3754 |
| U-Net-no_att-se_resnext50 | 0.8469 | 0.3652 |
| Ours | 0.8690 | 0.2527 |
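For reference, the Dice score and Hausdorff distance (HD) reported above follow their standard definitions, with P the predicted esophagus mask and G the ground truth; the exact units and any normalization applied by the challenge platform are not shown in this excerpt. The next table reports HD95, which replaces the maxima with the 95th percentile of the surface distances to reduce sensitivity to outliers.

```latex
\mathrm{Dice}(P,G) = \frac{2\,\lvert P \cap G \rvert}{\lvert P \rvert + \lvert G \rvert},
\qquad
\mathrm{HD}(P,G) = \max\!\left\{ \max_{p \in P} \min_{g \in G} \lVert p - g \rVert,\;
                                 \max_{g \in G} \min_{p \in P} \lVert p - g \rVert \right\}
```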
Comparison of esophagus segmentation results on the StructSeg 2019 dataset (Dice score and 95th-percentile Hausdorff distance, HD95).

| Method | Dice | HD95 |
| --- | --- | --- |
| MTL-WMCE [45] | 0.6055 | 28.96 |
| U-Net-cbam-resnet34 | 0.7490 | 17.68 |
| U-Net-cbam-seresnext50 | 0.7590 | 17.97 |
| U-Net-scse-resnet34 | 0.7606 | 19.90 |
| U-Net-scse-seresnext50 | 0.7762 | 11.31 |
| U-Net-no_att-resnet34 | 0.7575 | 12.43 |
| U-Net-no_att-se_resnext50 | 0.7705 | 14.39 |
| Ours | 0.7784 | 11.28 |
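In both tables above, "Ours" denotes the prediction obtained by fusing the outputs of the individually trained fold weights with the STAPLE algorithm (Section 3.3). The snippet below is a simplified, illustrative binary STAPLE in the spirit of Warfield et al. [53]: an expectation-maximization loop that estimates a per-fold sensitivity and specificity together with a consensus probability map. It is not the implementation used for the reported numbers, and the function and variable names are ours.

```python
import numpy as np


def staple_binary(masks: np.ndarray, iters: int = 30) -> np.ndarray:
    """Simplified binary STAPLE. `masks` has shape (R, N): R raters (here,
    per-fold predictions) over N flattened voxels. Returns P(voxel = foreground)."""
    R, N = masks.shape
    W = masks.mean(axis=0)            # initial consensus probability per voxel
    p = np.full(R, 0.9)               # per-rater sensitivities
    q = np.full(R, 0.9)               # per-rater specificities
    prior = W.mean()                  # global foreground prior (kept fixed here)
    for _ in range(iters):
        # E-step: posterior probability of the hidden true segmentation
        a = prior * np.prod(np.where(masks == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(masks == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / np.maximum(a + b, 1e-12)
        # M-step: re-estimate each rater's sensitivity and specificity
        p = (masks * W).sum(axis=1) / np.maximum(W.sum(), 1e-12)
        q = ((1 - masks) * (1 - W)).sum(axis=1) / np.maximum((1 - W).sum(), 1e-12)
    return W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.random(1000) < 0.2                                   # synthetic foreground voxels
    folds = np.stack([truth ^ (rng.random(1000) < 0.05) for _ in range(4)]).astype(int)
    fused = staple_binary(folds) > 0.5                               # consensus mask
    print((fused == truth).mean())
```

Thresholding the returned probabilities at 0.5 yields the fused mask.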
Effect of the normalization technique (batch normalization, BN, vs. group normalization, GN) on esophagus segmentation with the SegTHOR dataset.

| Normalization Technique | Dice | HD |
| --- | --- | --- |
| BN | 0.8667 | 0.2748 |
| GN | 0.8690 | 0.2527 |
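The ablation above changes only the normalization layers of the proposed model. As a minimal sketch of that swap, assuming a PyTorch implementation (the channel sizes and group count here are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int, norm: str = "gn", groups: int = 8) -> nn.Sequential:
    """3x3 conv + normalization + ReLU; `norm` selects GN or BN."""
    if norm == "gn":
        norm_layer = nn.GroupNorm(num_groups=groups, num_channels=out_ch)
    else:
        norm_layer = nn.BatchNorm2d(out_ch)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        norm_layer,
        nn.ReLU(inplace=True),
    )


# GN behaves identically for a batch size of 1, where BN statistics degrade.
block = conv_block(1, 32, norm="gn")
print(block(torch.randn(1, 1, 64, 64)).shape)   # torch.Size([1, 32, 64, 64])
```

Because GN computes statistics over channel groups rather than over the batch, it remains stable at the small batch sizes common in CT segmentation, which motivates its use in our model.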
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).