
NAG-Net: Nested attention-guided learning for segmentation of carotid lumen-intima interface and media-adventitia interface

Published: 01 April 2023

Abstract

Cardiovascular disease (CVD), the leading cause of death worldwide, poses a serious threat to human health. Segmentation of the carotid lumen-intima interface (LII) and media-adventitia interface (MAI) is a prerequisite for measuring intima-media thickness (IMT), which is of great significance for early screening and prevention of CVD. Despite recent advances, existing methods still fail to incorporate task-related clinical domain knowledge and require complex post-processing steps to obtain fine contours of the LII and MAI. In this paper, a nested attention-guided deep learning model (NAG-Net) is proposed for accurate segmentation of the LII and MAI. NAG-Net consists of two nested sub-networks, the Intima-Media Region Segmentation Network (IMRSN) and the LII and MAI Segmentation Network (LII-MAISN). It incorporates task-related clinical domain knowledge through the visual attention map generated by IMRSN, enabling LII-MAISN to concentrate during segmentation on the region a clinician would focus on for the same task. Moreover, fine contours of the LII and MAI can be obtained directly from the segmentation results with simple refinement and no complicated post-processing. To further improve the feature extraction ability of the model and reduce the impact of data scarcity, a transfer learning strategy that applies pretrained VGG-16 weights is also adopted. In addition, a channel attention-based encoder feature fusion block (EFFB-ATT) is designed to efficiently represent the useful features extracted by the two parallel encoders in LII-MAISN. Extensive experiments demonstrate that the proposed NAG-Net outperforms other state-of-the-art methods, achieving the highest scores on all evaluation metrics.
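The abstract describes the two-stage, nested design only at a high level. The minimal PyTorch-style sketch below illustrates one plausible way the attention guidance could be wired, assuming the IMRSN output is a single-channel probability map used to gate the image fed to LII-MAISN; the gating level and the residual term are assumptions for illustration, not the authors' exact design.

```python
# Minimal sketch of the nested attention-guided flow (assumptions noted above).
import torch
import torch.nn as nn


class NestedAttentionGuide(nn.Module):
    """Wires a region-segmentation network (IMRSN) to a contour network
    (LII-MAISN) through a visual attention map, per the abstract's description."""

    def __init__(self, imrsn: nn.Module, lii_maisn: nn.Module):
        super().__init__()
        self.imrsn = imrsn          # coarse intima-media region segmentation
        self.lii_maisn = lii_maisn  # fine LII/MAI segmentation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Stage 1: predict the intima-media region as a (B, 1, H, W) map in [0, 1].
        attention = torch.sigmoid(self.imrsn(x))
        # Stage 2: gate the input with the attention map so the second network
        # concentrates on the clinically relevant region; the residual "+ x"
        # preserves global context (an illustrative assumption).
        guided = x * attention + x
        return self.lii_maisn(guided)
```

In practice both sub-networks would be encoder-decoder segmentation models; the abstract additionally states that the attention map reflects the region a clinician attends to when performing the same task.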

Highlights

We propose a novel nested DCNN model for accurate segmentation of the LII and MAI.
Task-related clinical domain knowledge is incorporated as a segmentation prior.
An attention-based module is designed to efficiently use this prior knowledge (an illustrative sketch follows this list).
A transfer learning strategy is used to further improve model performance.
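The highlights mention an attention-based fusion module and a VGG-16 transfer-learning strategy. Below is a hedged, illustrative sketch of a channel-attention fusion block in the spirit of the EFFB-ATT described in the abstract (squeeze-and-excitation-style re-weighting of concatenated features from two parallel encoders); the layer layout and reduction ratio are assumptions, not the published design.

```python
# Illustrative sketch only; the authors' exact EFFB-ATT is not reproduced here.
import torch
import torch.nn as nn


class ChannelAttentionFusion(nn.Module):
    """Re-weights the concatenation of two encoder feature maps with
    squeeze-and-excitation-style channel attention before decoding."""

    def __init__(self, channels_a: int, channels_b: int, reduction: int = 16):
        super().__init__()
        fused = channels_a + channels_b
        hidden = max(fused // reduction, 1)
        self.pool = nn.AdaptiveAvgPool2d(1)           # global average pooling
        self.fc = nn.Sequential(                      # channel-weighting MLP
            nn.Linear(fused, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, fused),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        x = torch.cat([feat_a, feat_b], dim=1)        # (B, Ca+Cb, H, W)
        weights = self.fc(self.pool(x).flatten(1))    # (B, Ca+Cb)
        return x * weights.unsqueeze(-1).unsqueeze(-1)


# For the transfer-learning highlight, one of the parallel encoders could be
# initialized from ImageNet-pretrained VGG-16, e.g. (torchvision >= 0.13):
#   encoder = torchvision.models.vgg16(weights="IMAGENET1K_V1").features
```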




Published In

Computers in Biology and Medicine, Volume 156, Issue C
April 2023, 226 pages

Publisher

Pergamon Press, Inc.

United States

Publication History

Published: 01 April 2023

Author Tags

  1. Medical image segmentation
  2. LII and MAI segmentation
  3. Attention mechanism
  4. Asymmetric encoder–decoder architecture

Qualifiers

  • Research-article
