Performance Analysis of Deep-Neural-Network-Based Automatic Diagnosis of Diabetic Retinopathy
Figures
- Proposed framework for the detection of DR.
- Schematics of the CNN model for the detection of different DR stages.
- The adopted DTL process.
- The DTL with pre-trained and learnable weights.
- The pre-trained architecture of AlexNet.
- The pre-trained architecture of GoogleNet.
- The pre-trained architecture of Inception V4.
- The pre-trained architecture of Inception ResNet V2.
- The pre-trained architecture of ResNeXt-50.
Abstract
1. Introduction
1.1. State-of-the-Art on DR Detection Using Deep Learning Techniques
1.2. Research Gap
1.3. Contributions
- Our proposed methodology is flexible and automatically classifies patients' retinal images with high accuracy. It categorizes the dataset into different stages according to the severity of the disease. Moreover, it helps doctors select one or more CNN architectures for the diagnosis.
- We have analyzed the robustness of CNN architectures on our constructed (customized) dataset for the diagnosis of DR patients. A brief description of the customized dataset is provided in Section 1.4. This analysis highlights how both the CNN architecture and the dataset directly or indirectly affect the performance evaluation, and it shows that deep transfer learning with pre-trained models and a customized dataset can yield high-accuracy results.
- We have also analyzed how previously developed architectures perform on our dataset and how they can be fine-tuned to obtain the best results.
- To the best of our knowledge, the work proposed in this article is the first effort to evaluate recent CNNs using a customized dataset.
1.4. Customized Dataset for Performance Evaluation
1.5. Organization
2. Proposed Approach
2.1. Pre-Processing and Enhancement of DR Dataset
2.2. CNN Architecture
3. Pre-Trained CNN Architectures and Performance Metrics
- Load the images from each class folder.
- Resize each image to 80 × 80 pixels with cv2 and convert it to an array.
- Label every image with its class.
- Convert the images and labels to NumPy arrays.
- Split the images and labels into training and test sets with an 80–20 split, and convert the labels to categorical (one-hot) form.
- Set the training parameters of the model (e.g., epochs = 100, batch size = 32, etc.).
- Save both the model and the labels (pickle may be used).
- Finally, visualize the loss and accuracy; a minimal Python sketch of these steps is given after this list.
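The list above can be realized, for example, with the following minimal Python sketch. It is not the authors' code: the `dataset/` folder layout, the small stand-in CNN (in place of the architectures of Section 3), and all file names are assumptions made only for illustration.

```python
# Minimal sketch of the data-preparation and training steps listed above.
# Assumptions (not from the paper): one sub-folder per DR class inside a
# hypothetical "dataset/" directory, and a small illustrative CNN.
import os
import pickle

import cv2
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models
from tensorflow.keras.utils import to_categorical

DATA_DIR = "dataset"     # placeholder path: one sub-folder per class
IMG_SIZE = (80, 80)      # resize target from the pre-processing step

# 1) Load, resize and label the images
images, labels = [], []
class_names = sorted(os.listdir(DATA_DIR))
for label, class_name in enumerate(class_names):
    class_dir = os.path.join(DATA_DIR, class_name)
    for file_name in os.listdir(class_dir):
        img = cv2.imread(os.path.join(class_dir, file_name))
        if img is None:          # skip unreadable files
            continue
        images.append(cv2.resize(img, IMG_SIZE))
        labels.append(label)

# 2) Convert to NumPy arrays; one-hot encode the labels
X = np.array(images, dtype="float32") / 255.0
y = to_categorical(np.array(labels), num_classes=len(class_names))

# 3) 80-20 train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                     random_state=42)

# 4) Illustrative CNN (stand-in for the pre-trained architectures of Section 3)
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(80, 80, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(len(class_names), activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# 5) Train with the listed parameters (epochs = 100, batch size = 32)
history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                    epochs=100, batch_size=32)

# 6) Save the labels with pickle; the model is saved in Keras' native format
with open("labels.pickle", "wb") as f:
    pickle.dump(class_names, f)
model.save("dr_cnn_model.h5")

# 7) Visualize loss and accuracy
for key in ("loss", "val_loss", "accuracy", "val_accuracy"):
    plt.plot(history.history[key], label=key)
plt.xlabel("epoch")
plt.legend()
plt.show()
```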
3.1. AlexNet Architecture
3.2. GoogleNet Architecture
3.3. Inception V4 Architecture
3.4. Inception ResNet V2 Architecture
3.5. ResNeXt-50 Architecture
3.6. Performance Metrics
4. Results and Implementation
4.1. Creation of Custom Dataset
4.2. Experimental Setup
4.3. Results and Analysis
4.4. Comparison and Discussion
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Stage | Normal | Non-Proliferative | Non-Proliferative | Non-Proliferative | Proliferative |
|---|---|---|---|---|---|
| Years | 0 | 3–5 | 5–10 | 10–15 | >15 |
| Type of DR | N/A | Mild | Moderate | Severe | High-risk |
| Condition of retina | Healthy | A few tiny bulges in the blood vessels | Small lumps in the vessels, with noticeable spots of blood leakage and cholesterol deposits | Larger areas of blood leakage, irregular venous beading, formation of new blood vessels at the optic disc, and vein occlusion | Heavy bleeding and the formation of new blood vessels elsewhere in the retina; can lead to complete blindness |
| Parameters | AlexNet | GoogleNet | Inception V4 | Inception ResNet V2 | ResNeXt-50 |
|---|---|---|---|---|---|
| Optimizer | ADAM | ADAM | ADAM | ADAM | ADAM |
| Base learning rate | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 |
| Learning decay rate | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| Momentum | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 |
| RMSprop decay | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| Dropout rate | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |
| # of epochs | 30 | 30 | 30 | 30 | 30 |
| Train batch size | 32 | 32 | 32 | 32 | 32 |
| Test batch size | 8 | 8 | 8 | 8 | 8 |
| Total number of parameters | 60 M | 4 M | 43 M | 56 M | 27.56 M |
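As an illustration of how the hyperparameters in this table might be wired up, the sketch below uses TensorFlow/Keras with the Inception ResNet V2 backbone (the only one of the five that ships with Keras applications). The two-class output head, the input size, and the data variables are assumptions for illustration, not the authors' training script.

```python
# Hedged sketch: Adam with base LR 1e-5, momentum 0.9, RMSprop (second-moment)
# decay 0.999, LR decay factor 0.1, dropout 0.5, 30 epochs, batch size 32.
# The backbone, head size, and data variables are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))

x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dropout(0.5)(x)                       # dropout rate 0.5
out = tf.keras.layers.Dense(2, activation="softmax")(x)   # e.g., DR vs. no DR
model = tf.keras.Model(base.input, out)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5,  # base learning rate
                                       beta_1=0.9,          # momentum term
                                       beta_2=0.999),       # RMSprop decay
    loss="categorical_crossentropy",
    metrics=["accuracy"])

# Drop the learning rate by a factor of 0.1 when the validation loss plateaus
lr_decay = tf.keras.callbacks.ReduceLROnPlateau(factor=0.1, patience=3)

# X_train, y_train, X_val, y_val are assumed to be prepared as in Section 3
# history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
#                     epochs=30, batch_size=32, callbacks=[lr_decay])
```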
| Classifier | Folds | TP | TN | FP | FN | Accuracy (%) | Specificity (%) | Precision (%) | Recall (%) | F-score (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| AlexNet | F1 | 37 | 210 | 35 | 12 | 84.01 | 85.71 | 51.38 | 75.51 | 61.15 |
| | F2 | 38 | 210 | 37 | 12 | 83.50 | 85.02 | 50.66 | 76.0 | 60.80 |
| | F3 | 38 | 214 | 27 | 8 | 87.80 | 88.79 | 58.46 | 82.60 | 68.46 |
| | F4 | 37 | 216 | 27 | 8 | 87.84 | 88.88 | 57.81 | 82.22 | 67.89 |
| | F5 | 37 | 216 | 27 | 8 | 87.84 | 88.88 | 57.81 | 82.22 | 67.89 |
| GoogleNet | F1 | 38 | 219 | 22 | 7 | 89.86 | 90.87 | 63.33 | 84.44 | 72.38 |
| | F2 | 40 | 222 | 19 | 7 | 90.97 | 92.11 | 67.79 | 85.10 | 75.47 |
| | F3 | 38 | 221 | 18 | 8 | 90.87 | 92.46 | 67.85 | 82.61 | 74.51 |
| | F4 | 37 | 220 | 18 | 8 | 90.81 | 92.43 | 67.27 | 82.22 | 74.00 |
| | F5 | 38 | 220 | 18 | 7 | 91.16 | 92.43 | 67.85 | 84.44 | 75.24 |
| Inception V4 | F1 | 39 | 224 | 21 | 7 | 90.37 | 91.42 | 65.00 | 84.78 | 73.58 |
| | F2 | 39 | 224 | 17 | 8 | 91.32 | 92.94 | 69.64 | 82.97 | 75.72 |
| | F3 | 39 | 225 | 16 | 8 | 91.66 | 93.36 | 70.90 | 82.97 | 76.47 |
| | F4 | 39 | 226 | 18 | 8 | 91.06 | 92.62 | 68.42 | 82.98 | 75.00 |
| | F5 | 39 | 222 | 20 | 8 | 90.31 | 91.73 | 66.10 | 82.98 | 73.58 |
| Inception ResNet V2 | F1 | 40 | 220 | 18 | 6 | 91.55 | 92.44 | 68.96 | 86.96 | 76.92 |
| | F2 | 40 | 221 | 14 | 6 | 92.88 | 94.04 | 74.07 | 86.96 | 80.00 |
| | F3 | 40 | 227 | 14 | 7 | 92.71 | 94.19 | 74.07 | 85.11 | 79.21 |
| | F4 | 41 | 226 | 13 | 5 | 93.68 | 94.56 | 75.92 | 89.13 | 82.00 |
| | F5 | 39 | 223 | 18 | 6 | 91.61 | 92.53 | 68.42 | 86.67 | 76.47 |
| ResNeXt-50 | F1 | 41 | 233 | 8 | 5 | 95.47 | 96.68 | 83.67 | 89.13 | 86.31 |
| | F2 | 41 | 234 | 7 | 5 | 95.82 | 97.09 | 85.41 | 89.13 | 87.23 |
| | F3 | 42 | 234 | 6 | 4 | 96.50 | 97.50 | 87.50 | 91.30 | 89.36 |
| | F4 | 42 | 236 | 5 | 3 | 97.20 | 97.92 | 89.36 | 93.33 | 91.30 |
| | F5 | 41 | 236 | 5 | 2 | 97.53 | 97.92 | 89.13 | 95.35 | 92.13 |
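The percentage columns in this table follow the standard confusion-matrix definitions. The small Python helper below (not from the paper) reproduces them from TP, TN, FP, and FN; applied to the AlexNet F1 row it returns the tabulated values up to last-digit rounding.

```python
# Standard confusion-matrix metrics underlying the percentage columns above.
def performance_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                       # sensitivity
    f_score = 2 * precision * recall / (precision + recall)
    return {"accuracy": round(100 * accuracy, 2),
            "specificity": round(100 * specificity, 2),
            "precision": round(100 * precision, 2),
            "recall": round(100 * recall, 2),
            "f_score": round(100 * f_score, 2)}

# Example: AlexNet, fold F1 (TP = 37, TN = 210, FP = 35, FN = 12)
print(performance_metrics(37, 210, 35, 12))
# {'accuracy': 84.01, 'specificity': 85.71, 'precision': 51.39,
#  'recall': 75.51, 'f_score': 61.16}
# (small last-digit differences versus the table come from intermediate rounding)
```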
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Tariq, H.; Rashid, M.; Javed, A.; Zafar, E.; Alotaibi, S.S.; Zia, M.Y.I. Performance Analysis of Deep-Neural-Network-Based Automatic Diagnosis of Diabetic Retinopathy. Sensors 2022, 22, 205. https://doi.org/10.3390/s22010205