Retinal OCTA Image Segmentation Based on Global Contrastive Learning
Figure captions:
- COSNet architecture.
- ResNeSt block (only two branches are shown for brevity).
- Split-attention block (only two branches are shown for brevity).
- Fine-tune block.
- Original image (a) and enhanced image (b).
- ROC curves for ROSE-1 (left) and ROSE-2 (right).
- Segmentation results for ROSE-1.
- Segmentation results for OCTA-500 (ILM-OPL).
Abstract
1. Introduction
- A two-branch contrastive learning network for retinal OCTA image segmentation is proposed. The model effectively extracts features of vascular images by learning from both superficial and deep vascular annotations, while avoiding the vanishing of deep-vessel features. A segmentation head and a projection head are added at the end of the decoder to produce a segmentation map and pixel embeddings, respectively;
- A new pixel-wise contrastive loss function is proposed. By storing same-class pixel queues and pixel embeddings in a memory bank, the network learns features both within a single image and across same-class pixels in the whole dataset. This forces the model to attend to hard-to-segment samples, alleviating the class-imbalance problem and improving segmentation performance;
- A contrast-limited adaptive histogram equalization (CLANE) method with a fixed tile area is applied to retinal OCTA images to mitigate noise caused by imaging artifacts.
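The pixel-wise contrastive loss with a per-class memory bank can be sketched as follows. This is a minimal NumPy illustration of supervised pixel-level InfoNCE with MoCo-style FIFO queues, not the authors' implementation; the function names, queue size, and temperature are assumptions.

```python
import numpy as np

def pixel_contrastive_loss(embeddings, labels, memory_bank, temperature=0.1):
    """InfoNCE-style loss over pixel embeddings.

    embeddings:  (N, D) L2-normalized pixel embeddings from the projection head
    labels:      (N,)   class index per pixel (e.g., 0 = background, 1 = vessel)
    memory_bank: dict class -> (M, D) array of stored same-class embeddings
    """
    losses = []
    for i in range(len(embeddings)):
        anchor = embeddings[i]
        c = labels[i]
        positives = memory_bank[c]                           # same-class queue
        negatives = np.concatenate(
            [memory_bank[k] for k in memory_bank if k != c], axis=0)
        pos_sim = np.exp(positives @ anchor / temperature)   # (M,)
        neg_sim = np.exp(negatives @ anchor / temperature)   # (M',)
        # average -log p(positive) over all stored positives
        losses.append(np.mean(-np.log(pos_sim / (pos_sim + neg_sim.sum()))))
    return float(np.mean(losses))

def update_memory_bank(memory_bank, embeddings, labels, queue_size=256):
    """FIFO per-class queue update (MoCo-style): newest embeddings first."""
    for c in np.unique(labels):
        new = embeddings[labels == c]
        memory_bank[c] = np.concatenate([new, memory_bank[c]])[:queue_size]
    return memory_bank
```

Because the bank holds embeddings accumulated across many images, the anchor is contrasted against same-class features from the whole dataset, not just the current image.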
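The CLANE preprocessing step might look like the following simplified sketch: per-tile histogram clipping and equalization over a fixed tile area. Production implementations (e.g., OpenCV's `createCLAHE`) additionally interpolate bilinearly between neighbouring tile mappings to avoid block artifacts, which is omitted here; the tile size and clip limit are illustrative assumptions.

```python
import numpy as np

def clahe_fixed_area(img, tile=8, clip_limit=0.1, n_bins=256):
    """Simplified contrast-limited adaptive histogram equalization.

    img: 2-D uint8 grayscale en-face image; height and width are assumed
    divisible by `tile`. Each tile's histogram is clipped at `clip_limit`
    (as a fraction of the tile's pixel count) and the clipped excess is
    redistributed uniformly before equalization.
    """
    out = np.empty_like(img)
    h, w = img.shape
    clip = max(1, int(clip_limit * tile * tile))   # per-bin cap in counts
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = img[y:y + tile, x:x + tile]
            hist = np.bincount(block.ravel(), minlength=n_bins).astype(float)
            excess = np.maximum(hist - clip, 0).sum()
            hist = np.minimum(hist, clip) + excess / n_bins  # redistribute
            cdf = hist.cumsum() / hist.sum()
            lut = np.round(cdf * (n_bins - 1)).astype(np.uint8)
            out[y:y + tile, x:x + tile] = lut[block]
    return out
```

Clipping the histogram bounds the slope of the equalization mapping, which is what keeps CLAHE from amplifying the speckle-like noise of OCTA projections the way plain histogram equalization would.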
2. Methods and Theory
2.1. Feature Extraction Module
2.2. Contrastive Learning Module
2.3. Fine-Tune Module
3. Experiments and Results
3.1. Experimental Configuration
3.1.1. Dataset and Augmentation
3.1.2. Development Environment, Parameter Configuration, and Evaluation Metrics
3.2. Comparison Test
3.3. Ablation Study
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
OCTA | Optical coherence tomography angiography
MLP | Multilayer perceptron
CLANE | Contrast-limited adaptive histogram equalization
COSNet | Contrastive OCTA segmentation net
MSE | Mean-squared error
MoCo | Momentum contrast
SVC | Superficial vascular complexes
DVC | Deep vascular complexes
AUC | Area under curve
ACC | Accuracy
TP | True positive
TN | True negative
FP | False positive
FN | False negative
ILM | Internal limiting membrane
OPL | Outer plexiform layer
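The confusion-matrix terms above (TP, TN, FP, FN) determine most of the metrics reported in the results tables. A small sketch under the standard definitions (AUC is excluded, since it requires the full score distribution rather than thresholded counts):

```python
import math

def segmentation_metrics(tp, tn, fp, fn):
    """Common segmentation metrics from confusion-matrix counts."""
    n = tp + tn + fp + fn
    acc = (tp + tn) / n
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    g_mean = math.sqrt(sensitivity * specificity)
    dice = 2 * tp / (2 * tp + fp + fn)
    fdr = fp / (tp + fp)                    # false discovery rate
    # Cohen's kappa: observed accuracy vs chance agreement p_e
    p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((tn + fn) / n) * ((tn + fp) / n)
    kappa = (acc - p_e) / (1 - p_e)
    return {"ACC": acc, "G-Mean": g_mean, "Kappa": kappa,
            "Dice": dice, "FDR": fdr}
```

For example, counts of TP = 50, TN = 40, FP = 5, FN = 5 give ACC = 0.9 and Dice = 10/11 ≈ 0.909.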
References
- Akil, H.; Huang, A.S.; Francis, B.A.; Sadda, S.R.; Chopra, V. Retinal vessel density from optical coherence tomography angiography to differentiate early glaucoma, pre-perimetric glaucoma and normal eyes. PLoS ONE 2017, 12, e0170476. [Google Scholar] [CrossRef] [PubMed]
- Zhao, Y.; Zheng, Y.; Liu, Y.; Yang, J.; Zhao, Y.; Chen, D.; Wang, Y. Intensity and compactness enabled saliency estimation for leakage detection in diabetic and malarial retinopathy. IEEE Trans. Med. Imaging 2016, 36, 51–63. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Carnevali, A.; Mastropasqua, R.; Gatti, V.; Vaccaro, S.; Mancini, A.; D’aloisio, R.; Lupidi, M.; Cerquaglia, A.; Sacconi, R.; Borrelli, E.; et al. Optical coherence tomography angiography in intermediate and late age-related macular degeneration: Review of current technical aspects and applications. Appl. Sci. 2020, 10, 8865. [Google Scholar] [CrossRef]
- López-Cuenca, I.; Salobrar-García, E.; Gil-Salgado, I.; Sánchez-Puebla, L.; Elvira-Hurtado, L.; Fernández-Albarral, J.A.; Ramírez-Toraño, F.; Barabash, A.; de Frutos-Lucas, J.; Salazar, J.J.; et al. Characterization of Retinal Drusen in Subjects at High Genetic Risk of Developing Sporadic Alzheimer’s Disease: An Exploratory Analysis. J. Pers. Med. 2022, 12, 847. [Google Scholar] [CrossRef] [PubMed]
- Li, Y.; Wang, X.; Zhang, Y.; Zhang, P.; He, C.; Li, R.; Wang, L.; Zhang, H.; Zhang, Y. Retinal microvascular impairment in Parkinson’s disease with cognitive dysfunction. Park. Relat. Disord. 2022, 98, 27–31. [Google Scholar] [CrossRef] [PubMed]
- Azzopardi, G.; Strisciuglio, N.; Vento, M.; Petkov, N. Trainable COSFIRE filters for vessel delineation with application to retinal images. Med. Image Anal. 2015, 19, 46–57. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Liu, X.; Deng, Z.; Yang, Y. Recent progress in semantic image segmentation. Artif. Intell. Rev. 2019, 52, 1089–1106. [Google Scholar] [CrossRef] [Green Version]
- Zhou, S.K.; Greenspan, H.; Davatzikos, C.; Duncan, J.S.; Van Ginneken, B.; Madabhushi, A.; Prince, J.L.; Rueckert, D.; Summers, R.M. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proc. IEEE 2021, 109, 820–838. [Google Scholar] [CrossRef]
- Badar, M.; Haris, M.; Fatima, A. Application of deep learning for retinal image analysis: A review. Comput. Sci. Rev. 2020, 35, 100203. [Google Scholar] [CrossRef]
- Serte, S.; Serener, A.; Al-Turjman, F. Deep learning in medical imaging: A brief review. Trans. Emerg. Telecommun. Technol. 2022, 33, e4080. [Google Scholar] [CrossRef]
- Suganyadevi, S.; Seethalakshmi, V.; Balasamy, K. A review on deep learning in medical image analysis. Int. J. Multimed. Inf. Retr. 2022, 11, 19–38. [Google Scholar] [CrossRef]
- Zheng, W.; Liu, X.; Yin, L. Research on image classification method based on improved multi-scale relational network. PeerJ Comput. Sci. 2021, 7, e613. [Google Scholar] [CrossRef] [PubMed]
- Alzubaidi, L.; Fadhel, M.A.; Al-Shamma, O.; Zhang, J.; Santamaría, J.; Duan, Y. Robust application of new deep learning tools: An experimental study in medical imaging. Multimed. Tools Appl. 2022, 81, 13289–13317. [Google Scholar] [CrossRef]
- Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef] [PubMed]
- Fraz, M.M.; Remagnino, P.; Hoppe, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C.G.; Barman, S.A. An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Trans. Biomed. Eng. 2012, 59, 2538–2548. [Google Scholar] [CrossRef] [PubMed]
- Li, M.; Chen, Y.; Ji, Z.; Xie, K.; Yuan, S.; Chen, Q.; Li, S. Image projection network: 3D to 2D image segmentation in OCTA images. IEEE Trans. Med. Imaging 2020, 39, 3343–3354. [Google Scholar] [CrossRef] [PubMed]
- Ma, Y.; Hao, H.; Xie, J.; Fu, H.; Zhang, J.; Yang, J.; Wang, Z.; Liu, J.; Zheng, Y.; Zhao, Y. ROSE: A retinal OCT-angiography vessel segmentation dataset and new model. IEEE Trans. Med. Imaging 2020, 40, 928–939. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
- Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.W.; Wu, J. Unet 3+: A full-scale connected unet for medical image segmentation. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2020), Barcelona, Spain, 4–8 May 2020; pp. 1055–1059. [Google Scholar]
- Isensee, F.; Jaeger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef]
- Tomar, N.K.; Jha, D.; Riegler, M.A.; Johansen, H.D.; Johansen, D.; Rittscher, J.; Halvorsen, P.; Ali, S. Fanet: A feedback attention network for improved biomedical image segmentation. arXiv 2022, arXiv:2103.17235. [Google Scholar] [CrossRef]
- Peng, J.; Wang, P.; Desrosiers, C.; Pedersoli, M. Self-paced contrastive learning for semi-supervised medical image segmentation with meta-labels. Adv. Neural Inf. Process. Syst. 2021, 34, 16686–16699. [Google Scholar]
- Zhao, X.; Vemulapalli, R.; Mansfield, P.A.; Gong, B.; Green, B.; Shapira, L.; Wu, Y. Contrastive learning for label efficient semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10623–10633. [Google Scholar]
- Zhong, Y.; Yuan, B.; Wu, H.; Yuan, Z.; Peng, J.; Wang, Y.X. Pixel contrastive-consistent semi-supervised semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 7273–7282. [Google Scholar]
- Alonso, I.; Sabater, A.; Ferstl, D.; Montesano, L.; Murillo, A.C. Semi-supervised semantic segmentation with pixel-level contrastive learning from a class-wise memory bank. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 8219–8228. [Google Scholar]
- Zhang, H.; Wu, C.; Zhang, Z.; Zhu, Y.; Lin, H.; Zhang, Z.; Sun, Y.; He, T.; Mueller, J.; Manmatha, R.; et al. Resnest: Split-attention networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2736–2746. [Google Scholar]
- He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9729–9738. [Google Scholar]
- Kalantidis, Y.; Sariyildiz, M.B.; Pion, N.; Weinzaepfel, P.; Larlus, D. Hard negative mixing for contrastive learning. Adv. Neural Inf. Process. Syst. 2020, 33, 21798–21809. [Google Scholar]
- Li, M.; Huang, K.; Xu, Q.; Yang, J.; Zhang, Y.; Ji, Z.; Xie, K.; Yuan, S.; Liu, Q.; Chen, Q. OCTA-500: A Retinal Dataset for Optical Coherence Tomography Angiography Study. arXiv 2020, arXiv:2012.07261. [Google Scholar] [CrossRef]
- Li, Y.; Zeghlache, R.; Brahim, I.; Xu, H.; Tan, Y.; Conze, P.H.; Lamard, M.; Quellec, G.; Daho, M.E.H. Segmentation, Classification, and Quality Assessment of UW-OCTA Images for the Diagnosis of Diabetic Retinopathy. arXiv 2022, arXiv:2211.11509. [Google Scholar] [CrossRef]
- Yuan, M.; Wang, W.; Kang, S.; Li, Y.; Li, W.; Gong, X.; Xiong, K.; Meng, J.; Zhong, P.; Guo, X.; et al. Peripapillary Microvasculature Predicts the Incidence and Development of Diabetic Retinopathy: An SS-OCTA Study. Am. J. Ophthalmol. 2022, 243, 19–27. [Google Scholar] [CrossRef] [PubMed]
- Turker, I.C.; Dogan, C.U.; Dirim, A.B.; Guven, D.; Kutucu, O.K. Evaluation of early and late COVID-19-induced vascular changes with OCTA. Can. J. Ophthalmol. 2022, 57, 236–241. [Google Scholar] [CrossRef] [PubMed]
 | ROSE-1 | ROSE-2
---|---|---
Acquisition device | Optovue, USA | Heidelberg OCT2, Germany |
Number | 117 | 112 |
Resolution | 304 × 304 | 512 × 512 |
Image type | SVC, DVC, SVC + DVC 1 | SVC |
Annotation type | pixel and centerline level | centerline level |
Disease type | Alzheimer’s disease, macular degeneration, glaucoma, etc. | macular degeneration |
 | OCTA-3M | OCTA-6M
---|---|---
Number | 200 | 300 |
Resolution | 304 × 304 | 400 × 400 |
Image type | SVC | SVC |
Annotation type | pixel level | pixel level |
Method | AUC | ACC | G-Mean | Kappa | Dice | FDR |
---|---|---|---|---|---|---|
COSFIRE [6] | 0.8764 | 0.8978 | 0.7253 | 0.6125 | 0.6673 | 0.0985 |
U-Net [18] | 0.9028 | 0.8859 | 0.8038 | 0.6310 | 0.7015 | 0.2889 |
nnU-Net [20] | 0.9109 | 0.8996 | 0.8185 | 0.6687 | 0.7311 | 0.2253 |
FANet [21] | 0.9285 | 0.9057 | 0.8223 | 0.6815 | 0.7406 | 0.2230 |
OCTA-Net [17] | 0.9371 | 0.9098 | 0.8335 | 0.7022 | 0.7570 | 0.2045 |
COSNet (Ours) | 0.9452 | 0.9133 | 0.8402 | 0.7097 | 0.7645 | 0.2013 |
Method | AUC | ACC | G-Mean | Kappa | Dice | FDR |
---|---|---|---|---|---|---|
COSFIRE [6] | 0.7783 | 0.9210 | 0.7745 | 0.5698 | 0.6143 | 0.3890 |
U-Net [18] | 0.8365 | 0.9317 | 0.8000 | 0.6174 | 0.6559 | 0.3546 |
nnU-Net [20] | 0.8459 | 0.9342 | 0.8077 | 0.6346 | 0.6696 | 0.3328 |
FANet [21] | 0.8536 | 0.9373 | 0.8214 | 0.6578 | 0.6935 | 0.3236 |
OCTA-Net [17] | 0.8603 | 0.9386 | 0.8313 | 0.6721 | 0.7078 | 0.3018 |
COSNet (Ours) | 0.8674 | 0.9398 | 0.8386 | 0.6738 | 0.7104 | 0.3002 |
Method | AUC | ACC | G-Mean | Kappa | Dice | FDR |
---|---|---|---|---|---|---|
COSFIRE [6] | 0.8542 | 0.8645 | 0.7544 | 0.6583 | 0.7402 | 0.2213 |
U-Net [18] | 0.9156 | 0.9001 | 0.8021 | 0.6847 | 0.8054 | 0.2394 |
nnU-Net [20] | 0.9358 | 0.9089 | 0.8087 | 0.7065 | 0.8365 | 0.2175 |
FANet [21] | 0.9520 | 0.9275 | 0.8322 | 0.7344 | 0.8556 | 0.2160 |
OCTA-Net [17] | 0.9524 | 0.9246 | 0.8457 | 0.7857 | 0.9085 | 0.2037 |
COSNet (Ours) | 0.9676 | 0.9345 | 0.8628 | 0.7992 | 0.9168 | 0.1857 |
Method | AUC | ACC | G-Mean | Kappa | Dice | FDR |
---|---|---|---|---|---|---|
COSFIRE [6] | 0.8248 | 0.8392 | 0.7406 | 0.6084 | 0.7078 | 0.2859 |
U-Net [18] | 0.8876 | 0.8802 | 0.8085 | 0.7158 | 0.8045 | 0.3189 |
nnU-Net [20] | 0.8965 | 0.8854 | 0.8196 | 0.7295 | 0.8158 | 0.3064 |
FANet [21] | 0.9057 | 0.8935 | 0.8172 | 0.7354 | 0.8365 | 0.2930 |
OCTA-Net [17] | 0.9196 | 0.9107 | 0.8349 | 0.7536 | 0.8754 | 0.2778 |
COSNet (Ours) | 0.9388 | 0.9253 | 0.8457 | 0.7768 | 0.8869 | 0.2874 |
Method | AUC | ACC | G-Mean | Kappa | Dice | FDR |
---|---|---|---|---|---|---|
1 | 0.8874 | 0.8645 | 0.7937 | 0.6673 | 0.7284 | 0.2524
2 | 0.9199 | 0.9064 | 0.8285 | 0.6889 | 0.7463 | 0.2114
3 | 0.9307 | 0.9092 | 0.8344 | 0.7002 | 0.7561 | 0.2135
4 | 0.9321 | 0.9110 | 0.8375 | 0.7053 | 0.7604 | 0.2057
5 | 0.9304 | 0.9002 | 0.8285 | 0.6974 | 0.7552 | 0.2103
6 | 0.9452 | 0.9133 | 0.8402 | 0.7097 | 0.7645 | 0.2013
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Ma, Z.; Feng, D.; Wang, J.; Ma, H. Retinal OCTA Image Segmentation Based on Global Contrastive Learning. Sensors 2022, 22, 9847. https://doi.org/10.3390/s22249847