Efficient Gastrointestinal Disease Classification Using Pretrained Deep Convolutional Neural Network
Figure 1. Proposed methodology.
Figure 2. Sample images from dataset.
Figure 3. Images after applying our proposed contrast-enhancement technique.
Figure 4. Transfer learning process.
Figure 5. Accuracy results for various train-to-test ratios using the original dataset.
Figure 6. Confusion matrix using softmax classifier for the original dataset.
Figure 7. Accuracy results using contrast-enhanced images for various train-to-test ratios.
Figure 8. Confusion matrix using softmax classifier for the contrast-enhanced dataset.
Figure 9. Accuracy results using contrast-enhanced images for various train-to-test ratios of the augmented dataset.
Figure 10. Confusion matrix using softmax classifier for the contrast-enhanced augmented dataset.
Figure 11. Accuracy results using contrast-enhanced images for various train-to-test ratios of the mixed dataset (original images + augmented contrast-enhanced images).
Figure 12. Confusion matrix using softmax classifier for the mixed dataset (original images + augmented contrast-enhanced images).
Figure 13. Summarized performance comparison.
Figure 14. Training accuracy on 3-fold, 5-fold, and 10-fold cross-validation.
Figure 15. Training loss on 3-fold, 5-fold, and 10-fold cross-validation.
Abstract
1. Introduction
- An optimized brightness-controlled contrast-enhancement methodology based on the genetic algorithm is proposed;
- A lightweight pre-trained deep CNN model is deployed for significant feature extraction and classification;
- An analysis quantifying the improvement achieved by the proposed technique in terms of PSNR, MSE, VIF, SI, and IQI was performed;
- The significance of the proposed technique for the improvement in the classification of GI diseases was evaluated based on various performance metrics, such as accuracy, precision, recall, and F-measure.
2. Literature Review
3. Methodology
3.1. Dataset Collection and Preparation
3.2. Contrast Enhancement
Algorithm 1: Proposed Contrast Enhancement

START
Step 1: Input an RGB image.
Step 2: Separate the RGB channels of the image.
Step 3: Apply CLAHE-RGB on each channel.
Step 4: Normalize each channel using Equation (1).
Step 5: Convert the image to the HSV color space.
Step 6: Apply CLAHE-HSV.
Step 7: Convert the image back to RGB form from CLAHE-HSV.
END
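The steps of Algorithm 1 can be sketched in code. The following is a minimal, illustrative NumPy version: plain global histogram equalization stands in for CLAHE (which additionally tiles the image and clips the histogram), the min-max normalization approximates Equation (1), and the HSV steps (5–7) are only indicated in a comment. All function and variable names here are ours, not the paper's.

```python
import numpy as np

def equalize_channel(ch):
    """Global histogram equalization; a simplified stand-in for CLAHE (Step 3)."""
    hist, _ = np.histogram(ch, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    denom = max(cdf[-1] - cdf.min(), 1)
    # Build a lookup table that maps each intensity through the normalized CDF.
    lut = ((cdf - cdf.min()) * 255 / denom).astype(np.uint8)
    return lut[ch]

def normalize_channel(ch):
    """Min-max normalization of one channel to [0, 255] (Step 4, Equation (1))."""
    lo, hi = int(ch.min()), int(ch.max())
    if hi == lo:
        return np.zeros_like(ch)
    return ((ch.astype(np.float64) - lo) * 255 / (hi - lo)).astype(np.uint8)

def enhance_rgb(img):
    """Steps 2-4: split the RGB channels, equalize, then normalize each one.
    Steps 5-7 (convert to HSV, apply CLAHE there, convert back to RGB)
    are omitted in this sketch for brevity."""
    chans = [normalize_channel(equalize_channel(img[..., c])) for c in range(3)]
    return np.stack(chans, axis=-1)
```

A full implementation would replace `equalize_channel` with a tiled, clip-limited variant (e.g., OpenCV's `cv2.createCLAHE`) and add the HSV round trip.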
3.3. Data Augmentation
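The paper's exact augmentation operations are not listed in this excerpt; as an illustration, a common augmentation set for endoscopic images (flips and 90-degree rotations, which preserve label semantics) can be generated with NumPy:

```python
import numpy as np

def augment(img):
    """Return simple augmented variants of an image array of shape (H, W, C).
    Horizontal/vertical flips and 90-degree rotations are illustrative
    choices; the paper's actual augmentation set may differ."""
    return [
        img,                   # original
        np.flip(img, axis=1),  # horizontal flip
        np.flip(img, axis=0),  # vertical flip
        np.rot90(img, k=1),    # rotate 90 degrees counter-clockwise
        np.rot90(img, k=3),    # rotate 270 degrees counter-clockwise
    ]
```

Note that the rotated variants swap height and width, so pipelines that require a fixed input size would resize or crop afterwards.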
3.4. Transfer Learning
3.5. Feature Extraction and Classification
4. Results and Discussion
Ablation Study
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Ling, T.; Wu, L.; Fu, Y.; Xu, Q.; An, P.; Zhang, J.; Hu, S.; Chen, Y.; He, X.; Wang, J.; et al. A deep learning-based system for identifying differentiation status and delineating the margins of early gastric cancer in magnifying narrow-band imaging endoscopy. Endoscopy 2020, 53, 469–477.
- Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2021, 71, 209–249.
- Korkmaz, M.F. Artificial Neural Network by using HOG Features HOG_LDA_ANN. In Proceedings of the 2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, 14–16 September 2017; pp. 327–332.
- Li, S.; Cao, J.; Yao, J.; Zhu, J.; He, X.; Jiang, Q. Adaptive aggregation with self-attention network for gastrointestinal image classification. IET Image Process. 2022, 16, 2384–2397.
- Siegel, R.L.; Miller, K.; Jemal, A. Cancer statistics, 2015. CA Cancer J. Clin. 2015, 65, 5–29.
- Azhari, H.; King, J.; Underwood, F.; Coward, S.; Shah, S.; Ho, G.; Chan, C.; Ng, S.; Kaplan, G. The Global Incidence of Peptic Ulcer Disease at the Turn of the 21st Century: A Study of the Organization for Economic Co-Operation and Development (OECD). Am. J. Gastroenterol. 2018, 113, S682–S684.
- Kim, N.H.; Jung, Y.S.; Jeong, W.S.; Yang, H.-J.; Park, S.-K.; Choi, K.; Park, D.I. Miss rate of colorectal neoplastic polyps and risk factors for missed polyps in consecutive colonoscopies. Intest. Res. 2017, 15, 411–418.
- Iddan, G.; Meron, G.; Glukhovsky, A.; Swain, P. Wireless capsule endoscopy. Nature 2000, 405, 417.
- Muruganantham, P.; Balakrishnan, S.M. Attention Aware Deep Learning Model for Wireless Capsule Endoscopy Lesion Classification and Localization. J. Med. Biol. Eng. 2022, 42, 157–168.
- Khan, M.A.; Khan, M.A.; Ahmed, F.; Mittal, M.; Goyal, L.M.; Hemanth, D.J.; Satapathy, S.C. Gastrointestinal diseases segmentation and classification based on duo-deep architectures. Pattern Recognit. Lett. 2019, 131, 193–204.
- Khan, M.A.; Sarfraz, M.S.; Alhaisoni, M.; Albesher, A.A.; Wang, S.; Ashraf, I. StomachNet: Optimal Deep Learning Features Fusion for Stomach Abnormalities Classification. IEEE Access 2020, 8, 197969–197981.
- Amiri, Z.; Hassanpour, H.; Beghdadi, A. Feature extraction for abnormality detection in capsule endoscopy images. Biomed. Signal Process. Control 2021, 71, 103219.
- Khan, M.; Ashraf, I.; Alhaisoni, M.; Damaševičius, R.; Scherer, R.; Rehman, A.; Bukhari, S. Multimodal Brain Tumor Classification Using Deep Learning and Robust Feature Selection: A Machine Learning Application for Radiologists. Diagnostics 2020, 10, 565.
- Cicceri, G.; De Vita, F.; Bruneo, D.; Merlino, G.; Puliafito, A. A deep learning approach for pressure ulcer prevention using wearable computing. Human-Centric Comput. Inf. Sci. 2020, 10, 5.
- Wong, G.L.H.; Ma, A.J.; Deng, H.; Ching, J.Y.L.; Wong, V.W.S.; Tse, Y.K.; Yip, T.C.-F.; Lau, L.H.-S.; Liu, H.H.-W.; Leung, C.M.; et al. Machine learning model to predict recurrent ulcer bleeding in patients with history of idiopathic gastroduodenal ulcer bleeding. Aliment. Pharmacol. Ther. 2019, 49, 912–918.
- Wang, S.; Xing, Y.; Zhang, L.; Gao, H.; Zhang, H. Second glance framework (secG): Enhanced ulcer detection with deep learning on a large wireless capsule endoscopy dataset. In Proceedings of the Fourth International Workshop on Pattern Recognition, Nanjing, China, 28–30 June 2019; Volume 11198.
- Majid, A.; Khan, M.A.; Yasmin, M.; Rehman, A.; Yousafzai, A.; Tariq, U. Classification of stomach infections: A paradigm of convolutional neural network along with classical features fusion and selection. Microsc. Res. Tech. 2020, 83, 562–576.
- Usman, M.A.; Satrya, G.; Shin, S.Y. Detection of small colon bleeding in wireless capsule endoscopy videos. Comput. Med. Imaging Graph. 2016, 54, 16–26.
- Iakovidis, D.; Koulaouzidis, A. Automatic lesion detection in capsule endoscopy based on color saliency: Closer to an essential adjunct for reviewing software. Gastrointest. Endosc. 2014, 80, 877–883.
- Noya, F.; Alvarez-Gonzalez, M.A.; Benitez, R. Automated angiodysplasia detection from wireless capsule endoscopy. In Proceedings of the 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Jeju, South Korea, 11–15 July 2017; pp. 3158–3161.
- Li, B.; Meng, M.Q.-H. Texture analysis for ulcer detection in capsule endoscopy images. Image Vis. Comput. 2009, 27, 1336–1342.
- Fu, Y.; Zhang, W.; Mandal, M.; Meng, M.Q.-H. Computer-Aided Bleeding Detection in WCE Video. IEEE J. Biomed. Health Inform. 2013, 18, 636–642.
- Pan, G.; Yan, G.; Qiu, X.; Cui, J. Bleeding Detection in Wireless Capsule Endoscopy Based on Probabilistic Neural Network. J. Med. Syst. 2010, 35, 1477–1484.
- Li, B.; Meng, M.Q.-H. Computer-Aided Detection of Bleeding Regions for Capsule Endoscopy Images. IEEE Trans. Biomed. Eng. 2009, 56, 1032–1039.
- Mohapatra, S.; Pati, G.K.; Mishra, M.; Swarnkar, T. Gastrointestinal abnormality detection and classification using empirical wavelet transform and deep convolutional neural network from endoscopic images. Ain Shams Eng. J. 2023, 14, 101942.
- Koyama, S.; Okabe, Y.; Suzuki, Y.; Igari, R.; Sato, H.; Iseki, C.; Tanji, K.; Suzuki, K.; Ohta, Y. Differing clinical features between Japanese siblings with cerebrotendinous xanthomatosis with a novel compound heterozygous CYP27A1 mutation: A case report. BMC Neurol. 2022, 22, 193.
- Higuchi, N.; Hiraga, H.; Sasaki, Y.; Hiraga, N.; Igarashi, S.; Hasui, K.; Ogasawara, K.; Maeda, T.; Murai, Y.; Tatsuta, T.; et al. Automated evaluation of colon capsule endoscopic severity of ulcerative colitis using ResNet50. PLoS ONE 2022, 17, e0269728.
- Ji, X.; Xu, T.; Li, W.; Liang, L. Study on the classification of capsule endoscopy images. EURASIP J. Image Video Process. 2019, 2019, 1–7.
- Szczypiński, P.; Klepaczko, A.; Strzelecki, M. An Intelligent Automated Recognition System of Abnormal Structures in WCE Images. In Proceedings of the 6th International Conference on Hybrid Artificial Intelligent Systems (HAIS 2011), Wroclaw, Poland, 23–25 May 2011; Lecture Notes in Computer Science 6678; Springer: Berlin/Heidelberg, Germany, 2011; pp. 140–147.
- Patel, V.; Armstrong, D.; Ganguli, M.P.; Roopra, S.; Kantipudi, N.; Albashir, S.; Kamath, M.V. Deep Learning in Gastrointestinal Endoscopy. Crit. Rev. Biomed. Eng. 2016, 44, 493–504.
- Lee, J.H.; Kim, Y.J.; Kim, Y.W.; Park, S.; Choi, Y.-I.; Park, D.K.; Kim, K.G.; Chung, J.-W. Spotting malignancies from gastric endoscopic images using deep learning. Surg. Endosc. 2019, 33, 3790–3797.
- Khan, M.A.; Lali, M.I.U.; Sharif, M.; Javed, K.; Aurangzeb, K.; Haider, S.I.; Altamrah, A.S.; Akram, T. An Optimized Method for Segmentation and Classification of Apple Diseases Based on Strong Correlation and Genetic Algorithm Based Feature Selection. IEEE Access 2019, 7, 46261–46277.
- Yuan, Y.; Wang, J.; Li, B.; Meng, M.Q.-H. Saliency Based Ulcer Detection for Wireless Capsule Endoscopy Diagnosis. IEEE Trans. Med. Imaging 2015, 34, 2046–2057.
- Pogorelov, K.; Randel, K.R.; Griwodz, C.; Eskeland, S.L.; de Lange, T.; Johansen, D.; Spampinato, C.; Dang-Nguyen, D.-T.; Lux, M.; Schmidt, P.T.; et al. KVASIR: A multi-class image dataset for computer aided gastrointestinal disease detection. In Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017; pp. 164–169.
- Borgli, H.; Thambawita, V.; Smedsrud, P.H.; Hicks, S.; Jha, D.; Eskeland, S.L.; Randel, K.R.; Pogorelov, K.; Lux, M.; Nguyen, D.T.D.; et al. HyperKvasir: A comprehensive multi-class image and video dataset for gastrointestinal endoscopy. Sci. Data 2020, 7, 283.
- Jain, S.; Seal, A.; Ojha, A.; Yazidi, A.; Bures, J.; Tacheci, I.; Krejcar, O. A deep CNN model for anomaly detection and localization in wireless capsule endoscopy images. Comput. Biol. Med. 2021, 137, 104789.
- Lan, L.; Ye, C. Recurrent generative adversarial networks for unsupervised WCE video summarization. Knowledge-Based Syst. 2021, 222, 106971.
- Alhajlah, M.; Noor, M.N.; Nazir, M.; Mahmood, A.; Ashraf, I.; Karamat, T. Gastrointestinal Diseases Classification Using Deep Transfer Learning and Features Optimization. Comput. Mater. Contin. 2023, 75, 2227–2245.
- Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60.
- Noor, M.N.; Khan, T.A.; Haneef, F.; Ramay, M.I. Machine Learning Model to Predict Automated Testing Adoption. Int. J. Softw. Innov. 2022, 10, 1–15.
- Noor, M.N.; Nazir, M.; Rehman, S.; Tariq, J. Sketch-Recognition using Pre-Trained Model. In Proceedings of the National Conference on Engineering and Computing Technology, Islamabad, Pakistan, 8 January 2021.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. Available online: https://arxiv.org/abs/1704.04861 (accessed on 13 January 2023).
- Bae, K.; Ryu, H.; Shin, H. Does Adam optimizer keep close to the optimal point? arXiv 2019, arXiv:1911.00289.
- Ho, Y.; Wookey, S. The Real-World-Weight Cross-Entropy Loss Function: Modeling the Costs of Mislabeling. IEEE Access 2020, 8, 4806–4813.
- Shafiq, M.; Tian, Z.; Bashir, A.K.; Du, X.; Guizani, M. IoT malicious traffic identification using wrapper-based feature selection mechanisms. Comput. Secur. 2020, 94, 101863.
- Bhattacharya, S.; Maddikunta, P.K.R.; Hakak, S.; Khan, W.Z.; Bashir, A.K.; Jolfaei, A.; Tariq, U. Antlion re-sampling based deep neural network model for classification of imbalanced multimodal stroke dataset. Multimedia Tools Appl. 2020, 81, 41429–41453.
- Feng, L.; Ali, A.; Iqbal, M.; Bashir, A.K.; Hussain, S.A.; Pack, S. Optimal haptic communications over nanonetworks for e-health systems. IEEE Trans. Ind. Inform. 2019, 15, 3016–3027.
- Seo, S.; Kim, Y.; Han, H.-J.; Son, W.C.; Hong, Z.-Y.; Sohn, I.; Shim, J.; Hwang, C. Predicting Successes and Failures of Clinical Trials With Outer Product-Based Convolutional Neural Network. Front. Pharmacol. 2021, 12, 670670.
- Kumar, C.; Mubarak, D.M.N. Classification of Early Stages of Esophageal Cancer Using Transfer Learning. IRBM 2021, 43, 251–258.
- Ahmed, A. Classification of Gastrointestinal Images Based on Transfer Learning and Denoising Convolutional Neural Networks. In Proceedings of the International Conference on Data Science and Applications (ICDSA 2021); Springer: Singapore, 2022; pp. 631–639.
- Escobar, J.; Sanchez, K.; Hinojosa, C.; Arguello, H.; Castillo, S. Accurate Deep Learning-based Gastrointestinal Disease Classification via Transfer Learning Strategy. In Proceedings of the 2021 XXIII Symposium on Image, Signal Processing and Artificial Vision (STSIVA), Popayán, Colombia, 15–17 September 2021; pp. 1–5.
- Bang, C.S. Computer-Aided Diagnosis of Gastrointestinal Ulcer and Hemorrhage Using Wireless Capsule Endoscopy: Systematic Review and Diagnostic Test Accuracy Meta-analysis. J. Med. Internet Res. 2021, 23, e33267.
Performance Measure | Histogram Equalization | Adaptive Histogram Equalization | Contrast-Limited Adaptive Histogram Equalization | Our Proposed Method |
---|---|---|---|---|
PSNR | 19.22 | 22.67 | 24.48 | 37.44 |
Similarity Index | 0.99 | 0.90 | 0.88 | 0.86 |
Image Quality Index | 0.69 | 0.71 | 0.72 | 0.95 |
Mean Squared Error | 343.92 | 287.45 | 231.79 | 117.21 |
Visual Information Fidelity | 0.99 | 0.98 | 1.10 | 1.19 |
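Two of the measures in the table above, MSE and PSNR, follow directly from their standard definitions; a small sketch (assuming 8-bit images, so a peak value of 255):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(a, b)
    if err == 0:
        return float("inf")
    return float(10.0 * np.log10(peak ** 2 / err))
```

Higher PSNR (lower MSE) against the original image indicates less distortion introduced by the enhancement, which is how the table ranks the four methods.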
Symbols | Definition | Description
---|---|---
TP | True Positive | The number of diseased images that are accurately classified.
FP | False Positive | The number of diseased images that are incorrectly classified.
FN | False Negative | The number of normal images that are incorrectly classified.
TN | True Negative | The number of normal images that are accurately classified.
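The metrics reported in the tables that follow are the standard ones computed from these four counts; a minimal sketch (division-by-zero guards omitted for brevity):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F-measure, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f_measure, accuracy
```

For multi-class results such as those below, these quantities are typically averaged over the per-class counts.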
Classifier | Precision | Recall | F-Measure | Accuracy
---|---|---|---|---
Softmax | 84.27% | 76.25% | 80.06% | 81.14% |
Linear SVM | 74.16% | 67.23% | 70.53% | 71.39% |
Quadratic SVM | 82.37% | 71.17% | 76.36% | 77.84% |
Cubic SVM | 77.31% | 69.64% | 73.27% | 75.05% |
Bayesian | 81.22% | 73.37% | 77.10% | 79.21% |
Classifier | Precision | Recall | F-Measure | Accuracy
---|---|---|---|---
Softmax | 91.14% | 83.92% | 87.38% | 90.23% |
Linear SVM | 87.25% | 81.37% | 84.20% | 84.58% |
Quadratic SVM | 89.19% | 83.44% | 86.22% | 87.94% |
Cubic SVM | 85.11% | 77.64% | 81.20% | 81.22% |
Bayesian | 82.71% | 74.51% | 78.39% | 79.31% |
Classifier | Precision | Recall | F-Measure | Accuracy
---|---|---|---|---
Softmax | 97.57% | 93.02% | 95.24% | 96.40% |
Linear SVM | 91.11% | 84.39% | 87.62% | 87.94% |
Quadratic SVM | 95.27% | 87.41% | 91.17% | 94.77% |
Cubic SVM | 92.02% | 86.20% | 89.02% | 90.45% |
Bayesian | 95.18% | 90.63% | 92.85% | 94.98% |
Classifier | Precision | Recall | F-Measure | Accuracy
---|---|---|---|---
Softmax | 95.33% | 90.22% | 92.71% | 93.21% |
Linear SVM | 90.21% | 83.27% | 86.60% | 88.94% |
Quadratic SVM | 92.27% | 87.41% | 89.77% | 90.12% |
Cubic SVM | 89.34% | 83.31% | 86.22% | 86.59% |
Bayesian | 93.84% | 86.87% | 90.22% | 91.58% |
Classifier | Precision | Recall | F-Measure | Accuracy
---|---|---|---|---
Softmax | 87.29% | 79.33% | 83.12% | 83.95% |
Linear SVM | 77.20% | 72.56% | 74.81% | 75.18% |
Quadratic SVM | 84.64% | 73.44% | 78.64% | 79.87% |
Cubic SVM | 79.77% | 72.42% | 75.92% | 76.52% |
Bayesian | 83.41% | 75.32% | 79.16% | 79.98% |
Classifier | Precision | Recall | F-Measure | Accuracy
---|---|---|---|---
Softmax | 93.24% | 89.45% | 91.31% | 94.60% |
Linear SVM | 87.35% | 81.57% | 84.36% | 85.04% |
Quadratic SVM | 92.06% | 83.87% | 87.77% | 91.13% |
Cubic SVM | 91.84% | 83.11% | 87.26% | 88.78% |
Bayesian | 92.31% | 88.20% | 90.21% | 91.14% |
Technique | Accuracy |
---|---|
Applied logistic and ridge regression on multiple extracted features [15] | 83.3% |
Second glance framework based on CNN [16] | 85.69% |
Applied different classifiers (SVM, softmax, and decision trees) to colored, geometric, and texture features [17] | 93.64%
Used different pre-trained models for feature extraction and applied multiple classifiers [50] | 94.46%
Initially preprocessed images and applied a modified VGGNet model [34] | 86.6% |
Initially de-noised images and applied pre-trained CNN, i.e., AlexNet [51] | 90.17% |
DenseNet201 [52] | 78.55% |
ResNet-50 [53] | 90.42% |
Proposed Model | 96.40% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Nouman Noor, M.; Nazir, M.; Khan, S.A.; Song, O.-Y.; Ashraf, I. Efficient Gastrointestinal Disease Classification Using Pretrained Deep Convolutional Neural Network. Electronics 2023, 12, 1557. https://doi.org/10.3390/electronics12071557