Melanoma and Nevus Skin Lesion Classification Using Handcraft and Deep Learning Feature Fusion via Mutual Information Measures
Figure 1. Block diagram of the novel Computer-Aided Detection (CAD) system.
Figure 2. (a) Original image $I(x,y)$; (b) image (a) processed with a Gaussian filter.
Figure 3. (a) Original image $I(x,y)$; (b) $I(x,y)$ on channel L; (c) $I(x,y)$ on channel a*; (d) $I(x,y)$ on channel b*.
Figure 4. Results of the thresholding stage: (a) original image $I(x,y)$; (b) binary image $I_{Th_L}$ obtained from the threshold of the L channel; (c) binary image $I_{Th_a}$ obtained from the threshold of the a* channel; (d) binary image $I_{Th_b}$ obtained from the threshold of the b* channel.
Figure 5. Results of the preprocessing stage: original image $I(x,y)$; (a) Region of Interest (ROI) obtained; (b) segmented image $S_{I_{bin}}$; (c) obtained asymmetry at 0°; (d) obtained asymmetry at 90°.
Figure 6. Distribution of classes in the ISIC2018/HAM10000 dataset.
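The captions above summarize the preprocessing chain of Section 3.1: Gaussian smoothing, conversion to the CIE L*a*b* color space, per-channel thresholding, and extraction of the lesion ROI. A minimal OpenCV sketch of that flow follows; the Otsu criterion and the file path are illustrative assumptions, since the captions do not name the thresholding method.

```python
# Minimal sketch of the preprocessing in Figures 2-5: Gaussian filter,
# CIE L*a*b* channel split, and per-channel thresholding. Otsu's criterion
# is an assumption here; the image path is a placeholder.
import cv2

image = cv2.imread("lesion.jpg")                  # I(x, y)
smoothed = cv2.GaussianBlur(image, (5, 5), 0)     # Figure 2b

lab = cv2.cvtColor(smoothed, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)                          # Figure 3b-d

# One binary mask per channel (Figure 4b-d); the masks are later combined
# to obtain the lesion ROI shown in Figure 5.
_, th_L = cv2.threshold(L, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
_, th_a = cv2.threshold(a, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, th_b = cv2.threshold(b, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```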
Abstract
1. Introduction
2. Literature Survey
- Asymmetry A: The lesion is bisected by two axes placed perpendicular to each other (at 90°), oriented so as to yield the lowest possible asymmetry score; this determines whether the lesion is symmetrical. One point is added for each axis along which asymmetry is found.
- Borders B: The lesion is divided into eight segments by eight axes, and each segment is examined for an abrupt border cutoff. One point is added for each segment that presents an abrupt border.
- Colors C: The lesion can contain one or more of the following colors: white, light brown, dark brown, black, blue, and red. These are produced by vessels and melanin concentrations, and one point is added for each color found.
- Dermatoscopic Structures D: The lesion is examined for the presence of the following structures: dots, globules, pigmented networks, and structureless areas. One point is added for each structure spotted on the lesion; the four scores are then combined into a single value, as sketched below.
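In the ABCD rule of Stolz and Nachbar et al. (cited below), the four scores are combined into a Total Dermoscopy Score (TDS) with fixed weights. A minimal sketch, assuming the standard weights and decision thresholds from the ABCD literature (the function name is illustrative, not the authors' code):

```python
def total_dermoscopy_score(asymmetry, border, colors, structures):
    """TDS with the standard ABCD-rule weights (Nachbar et al., 1994).

    asymmetry:  0-2, one point per axis showing asymmetry
    border:     0-8, one point per segment with an abrupt border
    colors:     1-6, one point per color present
    structures: number of dermatoscopic structures present
    """
    return 1.3 * asymmetry + 0.1 * border + 0.5 * colors + 0.5 * structures

# Conventional interpretation: TDS < 4.75 benign, 4.75-5.45 suspicious,
# TDS > 5.45 highly suspicious for melanoma.
print(total_dermoscopy_score(2, 5, 4, 3))  # 6.6 -> highly suspicious
```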
Principal Contributions
- A brief survey of computer-aided detection methods that employ fusion of handcraft and deep learning features is presented.
- Despite the recent tendency to avoid the ABCD medical algorithm and its variants, we utilize descriptors based on it, such as shape, color, and texture, as a new aggregation, followed by the extraction of deep learning features.
- A balancing method was employed because of the class imbalance of the ISIC database. The SMOTE oversampling technique was applied (see the sketch after this list), which in this work demonstrates improved performance in differentiating melanoma from benign lesion images.
- A fusion method that employs relevant mutual information obtained from handcraft and deep learning features was designed; it appears to demonstrate better performance than state-of-the-art CAD systems.
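The rebalancing mentioned in the third contribution can be reproduced roughly as follows; a minimal sketch assuming the imbalanced-learn implementation of SMOTE cited in the references, with placeholder arrays standing in for the fused feature vectors:

```python
# Minimal sketch (assumed setup, not the authors' code): rebalancing the
# melanoma/nevus classes with SMOTE from imbalanced-learn before training.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.random((1000, 50))               # stand-in for fused feature vectors
labels = (rng.random(1000) < 0.2).astype(int)   # imbalanced: ~20% melanoma

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0)

# Oversample only the training split so the test distribution stays realistic.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)
```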
3. Materials and Methods
3.1. Preprocessing
3.2. Handcraft Features
3.2.1. Shape Features
3.2.2. Colour Features
3.2.3. Texture Features
3.3. Deep Learning Features
3.3.1. Transfer Learning
- $\mathcal{X}_S \neq \mathcal{X}_T$: The feature spaces of the source and target domains are different.
- $P(X_S) \neq P(X_T)$: The marginal probability distributions are different.
- $\mathcal{Y}_S \neq \mathcal{Y}_T$: The label spaces are different.
- $P(Y_S \mid X_S) \neq P(Y_T \mid X_T)$: The conditional probability distributions are different.
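The feature-extractor setting described next (Section 3.3.2) removes the classifier head of a pretrained network and keeps its pooled activations. A minimal Keras sketch, assuming ImageNet weights and global average pooling; for Inception v3 this yields the 2048-dimensional vectors reported later:

```python
# Minimal sketch: a pretrained CNN as a frozen feature extractor (transfer
# learning). Assumes TensorFlow/Keras with ImageNet weights; the input batch
# is a placeholder for preprocessed dermoscopy images.
import numpy as np
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input

# include_top=False drops the ImageNet classifier; pooling="avg" gives one
# 2048-dimensional vector per image instead of a spatial feature map.
extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
extractor.trainable = False

batch = np.random.rand(8, 299, 299, 3) * 255.0   # stand-in for input images
deep_features = extractor.predict(preprocess_input(batch))
print(deep_features.shape)                        # (8, 2048)
```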
3.3.2. Feature Extractor
3.3.3. Deep Learning Architectures
3.4. Algorithm Summary
3.5. Feature Selection
Mutual Information
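For reference, the quantity driving the selection is the textbook definition of mutual information between two discrete random variables $X$ and $Y$:

$$ I(X;Y) = \sum_{x}\sum_{y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} = H(X) + H(Y) - H(X,Y). $$

Features whose values share more information with the class label therefore receive higher scores; the cited estimators of Kozachenko and Leonenko, Kraskov et al., and Ross extend this definition to continuous and mixed variables via k-nearest-neighbor statistics.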
Algorithm 1 Algorithm summary
Require: PSL image
(a) Preprocessing
(b) Handcraft features
(c) Deep learning features
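A minimal sketch of the mutual-information selection step, assuming the scikit-learn estimator (which implements the Ross estimator cited in the references) and placeholder arrays; the value of k is illustrative, not the authors' setting:

```python
# Minimal sketch: ranking fused handcraft + deep features by mutual information
# with the class label and keeping the top k. All arrays and k are placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
handcraft = rng.random((500, 26))      # stand-in for shape/color/texture features
deep = rng.random((500, 2048))         # stand-in for CNN feature vectors
labels = rng.integers(0, 2, 500)       # 0 = nevus, 1 = melanoma

fused = np.hstack([handcraft, deep])   # simple concatenation before selection
selector = SelectKBest(score_func=mutual_info_classif, k=200)
selected = selector.fit_transform(fused, labels)

# Per-feature MI scores, comparable to the ranked tables in the results section.
top10 = np.argsort(selector.scores_)[::-1][:10]
```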
4. Results and Discussion
4.1. Experimental Results
4.1.1. Experimental Setup
4.1.2. Evaluation Metrics
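All metrics reported in the tables below derive from the binary confusion matrix. A minimal sketch of the less standard ones, assuming placeholder label arrays; the IBA of García et al. (cited in the references) is taken from imbalanced-learn:

```python
# Minimal sketch: the confusion-matrix metrics used in the results tables.
# y_true / y_pred are placeholder arrays, not the paper's predictions.
import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef
from imblearn.metrics import geometric_mean_score, make_index_balanced_accuracy

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                 # recall on the positive class
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
f_score = 2 * precision * sensitivity / (precision + sensitivity)
g_mean = np.sqrt(sensitivity * specificity)  # = geometric_mean_score(y_true, y_pred)
mcc = matthews_corrcoef(y_true, y_pred)

# Index of Balanced Accuracy, as implemented in imbalanced-learn.
iba = make_index_balanced_accuracy(alpha=0.1, squared=True)(geometric_mean_score)
print(g_mean, mcc, iba(y_true, y_pred))
```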
4.1.3. Dataset
4.1.4. Data Balancing
4.1.5. Experimental Results
5. Conclusions and Future Work
Author Contributions
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
ELM | Epiluminescence Microscopy
MD | Medical Doctor
PSL | Pigmented Skin Lesion
ROI | Region of Interest
GLCM | Gray-Level Co-occurrence Matrix
DL | Deep Learning
CAD | Computer-Aided Detection
SVM | Support Vector Machine
RVM | Relevance Vector Machine
LR | Logistic Regression
CNN | Convolutional Neural Network
TF | Transfer Learning
MI | Mutual Information
K-NN | K-Nearest Neighbors
MCC | Matthews Correlation Coefficient
References
- Skin Cancers. Available online: http://www.who.int/uv/faq/skincancer/en/index1.html (accessed on 15 January 2020).
- Skin Cancer. Available online: https://www.wcrf.org/dietandcancer/skin-cancer (accessed on 15 January 2020).
- Baldi, A.; Quartulli, M.; Murace, R.; Dragonetti, E.; Manganaro, M.; Guerra, O.; Bizzi, S. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions. Cancers 2010, 2, 262–273.
- Almaraz-Damian, J.A.; Ponomaryov, V.; Rendon-Gonzalez, E. Melanoma CADe based on ABCD Rule and Haralick Texture Features. In Proceedings of the 2016 9th International Kharkiv Symposium on Physics and Engineering of Microwaves, Millimeter and Submillimeter Waves (MSMW), Kharkiv, Ukraine, 20–24 June 2016; pp. 1–4.
- Li, Y.; Shen, L. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network. Sensors 2018, 18, 556.
- Lopez, A.R.; Giro-i-Nieto, X.; Burdick, J.; Marques, O. Skin lesion classification from dermoscopic images using deep learning techniques. In Proceedings of the 2017 13th IASTED International Conference on Biomedical Engineering (BioMed), Innsbruck, Austria, 20–21 February 2017; pp. 49–54.
- Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE 2019, 14, e0217293.
- Castillejos, H.; Ponomaryov, V.; Nino-De-Rivera, L.; Golikov, V. Wavelet Transform Fuzzy Algorithms for Dermoscopic Image Segmentation. Comput. Math. Method. Med. 2012, 2012, 578721.
- Nachbar, F.; Stolz, W.; Merkle, T.; Cognetta, A.B.; Vogt, T.; Landthaler, M.; Bilek, P.; Braun-Falco, O.; Plewig, G. The ABCD Rule of Dermatoscopy. J. Am. Acad. Dermatol. 1994, 30, 551–559.
- Zalaudek, I.; Argenziano, G.; Soyer, H.P.; Corona, R.; Sera, F.; Blum, A.; Braun, R.P.; Cabo, H.; Ferrara, G.; Kopf, A.W.; et al. Three-point checklist of dermoscopy: An open internet study. Br. J. Dermatol. 2005, 154, 431–437.
- Henning, J.S.; Dusza, S.W.; Wang, S.Q.; Marghoob, A.A.; Rabinovitz, H.S.; Polsky, D.; Kopf, A.W. The CASH (color, architecture, symmetry, and homogeneity) algorithm for dermoscopy. J. Am. Acad. Dermatol. 2007, 56, 45–52.
- Stolz, W.; Riemann, A.; Cognetta, A.B.; Pillet, L.; Abmayr, W.; Hölzel, D.; Bilek, P.; Nachbar, F.; Landthaler, M.; Braun-Falco, O. ABCD rule of dermatoscopy: A new practical method for early recognition of malignant melanoma. Eur. J. Dermatol. 1994, 4, 521–527.
- Argenziano, G.; Fabbrocini, G.; Carli, P.; De Giorgi, V.; Sammarco, E.; Delfino, M. Epiluminescence Microscopy for the Diagnosis of Doubtful Melanocytic Skin Lesions: Comparison of the ABCD Rule of Dermatoscopy and a New 7-Point Checklist Based on Pattern Analysis. Arch. Dermatol. 1998, 134, 1563–1570.
- Melanoma Education Foundation. Finding Melanoma Early: Warning Signs & Photos. Available online: https://www.skincheck.org/Page4.php (accessed on 2 February 2020).
- MoleMap NZ Official Site. The EFG of Nodular Melanomas. Available online: https://www.molemap.co.nz/knowledge-centre/efg-nodular-melanomas (accessed on 2 February 2020).
- Jensen, J.D.; Elewski, B.E. The ABCDEF Rule: Combining the “ABCDE Rule” and the “Ugly Duckling Sign” in an Effort to Improve Patient Self-Screening Examinations. J. Clin. Aesthet. Dermatol. 2015, 8, 15.
- Kalkhoran, S.; Milne, O.; Zalaudek, I.; Puig, S.; Malvehy, J.; Kelly, J.W.; Marghoob, A.A. Historical, Clinical, and Dermoscopic Characteristics of Thin Nodular Melanoma. Arch. Dermatol. 2010, 146, 311–318.
- Adjed, F.; Gardezi, S.J.S.; Ababsa, F.; Faye, I.; Dass, S.C. Fusion of structural and textural features for melanoma recognition. IET Comput. Vis. 2018, 12, 185–195.
- Mendonça, T.; Ferreira, P.M.; Marques, J.; Marcal, A.R.S.; Rozeira, J. PH2—A dermoscopic image database for research and benchmarking. In Proceedings of the 35th International Conference of the IEEE Engineering in Medicine and Biology Society, Osaka, Japan, 3–7 July 2013.
- Hagerty, J.R.; Stanley, R.J.; Almubarak, H.A.; Lama, N.; Kasmi, R.; Guo, P.; Stoecker, W.V. Deep Learning and Handcrafted Method Fusion: Higher Diagnostic Accuracy for Melanoma Dermoscopy Images. IEEE J. Biomed. Health Inform. 2019, 23, 1385–1391.
- Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2018, arXiv:1902.03368.
- Li, X.; Wu, J.; Jiang, H.; Chen, E.Z.; Dong, X.; Rong, R. Skin Lesion Classification Via Combining Deep Learning Features and Clinical Criteria Representations. bioRxiv 2018. bioRxiv:382010.
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Y.; Liu, T.-Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; pp. 3149–3157.
- Abbas, Q.; Celebi, M.E. DermoDeep-A classification of melanoma-nevus skin lesions using multi-feature fusion of visual features and deep neural network. Multimed. Tool. Appl. 2019, 78, 23559–23580.
- Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images; Technical Report TR-2009; University of Toronto: Toronto, ON, Canada, 2009.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826.
- Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807.
- Huang, G.; Liu, Z.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
- Orea-Flores, I.Y.; Gallegos-Funes, F.J.; Arellano-Reynoso, A. Local Complexity Estimation Based Filtering Method in Wavelet Domain for Magnetic Resonance Imaging Denoising. Entropy 2019, 21, 401.
- Goritskiy, Y.; Kazakov, V.; Shevchenko, O.; Mendoza, F. Model of Random Field with Piece-Constant Values and Sampling-Restoration Algorithm of Its Realizations. Entropy 2019, 21, 792.
- Yang, M.; Kpalma, K.; Joseph, R. A Survey of Shape Feature Extraction Techniques. In Pattern Recognition Techniques, Technology and Applications; Yin, P.-Y., Ed.; InTech: London, UK, 2008; Available online: http://www.intechopen.com/books/pattern_recognition_techniques_technology_and_applications/a_survey_of_shape_feature_extraction_techniques (accessed on 10 January 2020).
- Alceu, B.; Fornaciali, M.; Valle, E.; Avila, S. (De) Constructing Bias on Skin Lesion Datasets. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–20 June 2019; pp. 2766–2774.
- Sirakov, N.M.; Mete, M.; Chakrader, N.S. Automatic boundary detection and symmetry calculation in dermoscopy images of skin lesions. IEEE Intern. Conf. Image Process. 2011, 1, 1637–1640.
- Haralick, R.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci. 2018, 2018, 7068349.
- Chollet, F. Deep Learning with Python, 1st ed.; Manning Publications Co.: Greenwich, CT, USA, 2017.
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? In Proceedings of the 27th International Conference on Neural Information Processing Systems—Volume 2, Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA, USA, 2014; pp. 3320–3328.
- Rawat, W.; Wang, Z. Deep convolutional neural networks for image classification: A comprehensive review. Neural Comput. 2017, 29, 2352–2449.
- Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
- Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A survey of transfer learning. J. Big Data 2016, 3, 9.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
- Bommert, A.; Sun, X.; Bischl, B.; Rahnenführer, J.; Lang, M. Benchmark for Filter Methods for Feature Selection in High-Dimensional Classification Data. Comput. Stat. Data Anal. 2020, 143, 106839.
- Karczmarek, P.; Pedrycz, W.; Kiersztyn, A.; Rutka, P. A study in facial features saliency in face recognition: An analytic hierarchy process approach. Soft Comput. 2017, 21, 7503–7517.
- Kozachenko, L.F.; Leonenko, N.N. Sample Estimate of the Entropy of a Random Vector. Probl. Peredachi Inf. 1987, 23, 9–16.
- Kraskov, A.; Stögbauer, H.; Grassberger, P. Estimating mutual information. Phys. Rev. E 2004, 69, 066138.
- Houghton, C. Calculating mutual information for spike trains and other data with distances but no coordinates. R. Soc. Open Sci. 2015, 2.
- Ross, B.C. Mutual information between discrete and continuous data sets. PLoS ONE 2014, 9.
- Vapnik, V. Statistical Learning Theory; John Wiley: New York, NY, USA, 1998.
- Tipping, M.E. Sparse bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1, 211–244.
- Bishop, C. Probabilistic graphical models and their role in machine learning. In Proceedings of the NATO ASI–LTP 2002 Tutorial, Leuven, Belgium, 8–19 July 2002.
- Chollet, F. Keras. 2015. Available online: https://keras.io (accessed on 15 January 2020).
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
- Coelho, L.P. Mahotas: Open source software for scriptable computer vision. J. Open Res. Softw. 2013, 1, e3.
- Wen, Z.; Shi, J.; Li, Q.; He, B.; Chen, J. ThunderSVM: A fast SVM library on GPUs and CPUs. J. Mach. Learn. Res. 2018, 19, 797–801.
- Lemaître, G.; Nogueira, F.; Aridas, C.K. Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning. J. Mach. Learn. Res. 2017, 18, 559–563.
- Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. J. Mach. Learn. Technol. 2011, 2, 37–63.
- Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161.
- Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357.
- Celebi, M.E.; Kingravi, H.A.; Uddin, B.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imag. Graph. 2007, 31, 362–373.
- Capdehourat, G.; Corez, A.; Bazzano, A.; Alonso, R.; Musé, P. Toward a combined tool to assist dermatologists in melanoma detection from dermoscopic images of pigmented skin lesions. Pattern Recognit. Lett. 2011, 32, 2187–2196.
- García, V.; Mollineda, R.A.; Sánchez, J.S. Index of Balanced Accuracy: A Performance Measure for Skewed Class Distributions; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2009; Volume 5524, pp. 441–448.
CNN Architecture | Feature Vector Length
---|---
VGG19 | 4096
VGG16 | 4096
ResNET-50 | 2048
Inception v3 | 2048
Mobilenet v1 | 1024
Mobilenet v2 | 1280
DenseNET-201 | 1920
Xception | 2048
Inception v3 + Handcraft Features

Feature | Mutual Info. Value
---|---
1448 | 3.813956037657107 × 10
648 | 1.4314941993776031 × 10
1020 | 3.4236070477255964 × 10
804 | 3.515106075036023 × 10
333 | 4.213506368255793 × 10
562 | 4.4971751217204314 × 10
91 | 5.133156269931938 × 10
852 | 6.514623080855486 × 10
1689 | 7.53828133426282 × 10
1788 | 7.605629690621285 × 10
Inception v3 + Handcraft Features

Feature | Mutual Info. Value
---|---
Mean_b | 0.09829037284938669
Min_b | 0.09536305749234275
Max_b | 0.06834317593527395
578 | 0.06131147578510121
Min_G | 0.05817685924318594
116 | 0.055293703553799256
389 | 0.05446628875169646
464 | 0.05424503079140042
Var_L | 0.05420063226575533
288 | 0.053949117014718606
CNN Architecture | Acc. Train (%) | Acc. Test (%) | Sensitivity (%) | Specificity | Precision (%) | F-Score (%) | AUC (%) | G-Mean | IBA | MCC
---|---|---|---|---|---|---|---|---|---|---
VGG16 | 88.60 | 84.90 | 79.23 | 0.85 | 88.74 | 83.71 | 84.79 | 0.85 | 0.72 | 0.7012
VGG19 | 90.23 | 87.14 | 82.46 | 0.87 | 90.44 | 86.26 | 87.05 | 0.87 | 0.76 | 0.7451
Mobilenet v1 | 91.48 | 89.32 | 84.04 | 0.89 | 93.49 | 88.51 | 89.21 | 0.89 | 0.79 | 0.7898
Mobilenet v2 | 92.40 | 89.71 | 86.41 | 0.90 | 92.08 | 89.16 | 89.64 | 0.90 | 0.80 | 0.7953
ResNET-50 | 90.67 | 87.86 | 81.24 | 0.88 | 93.09 | 86.76 | 87.72 | 0.87 | 0.77 | 0.7624
DenseNET-201 | 91.10 | 88.54 | 83.25 | 0.88 | 92.61 | 87.68 | 88.44 | 0.88 | 0.78 | 0.5985
Inception V3 | 91.33 | 88.10 | 84.87 | 0.88 | 90.59 | 87.42 | 88.02 | 0.88 | 0.77 | 0.7632
Xception | 90.47 | 87.53 | 83.19 | 0.87 | 90.58 | 86.73 | 87.44 | 0.87 | 0.76 | 0.7525
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).