Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network
Figure 1. Pre-processing of a skin lesion image: the center area is cropped first and then proportionally resized to a lower resolution (image sizes are given in pixels).
Figure 2. Residual building blocks. (a) Plain identity shortcut; (b) bottleneck; (c) residual in residual (RiR). (a,b) are adopted in the original FCRN-50 and FCRN-101.
Figure 3. Flowchart of the Lesion Indexing Network (LIN). The framework contains two FCRNs and a calculation unit for the lesion index (image sizes are given in pixels).
Figure 4. Examples of skin lesion images with outlines (blue) and distance maps. The first column (a,c) shows the original lesion images and the second (b,d) shows the corresponding distance maps. The original lesion images are about 1300 × 1000 pixels and 1000 × 800 pixels, respectively; distance-map sizes are given in pixels, and the numbers on the color bar represent the corresponding weights.
Figure 5. Example of superpixels. The original image (a), 1022 × 767 pixels, was subdivided into 996 superpixel areas (b) separated by black lines.
Figure 6. Flowchart of the Lesion Feature Network (LFN).
Figure 7. Examples of lesion images from ISIC 2017 and their masks. The first row shows the original images of different lesions, the second row shows the segmentation masks, and the third row shows the superpixel masks for dermoscopic feature extraction. The lesion images are 1022 × 767, 3008 × 2000 and 1504 × 1129 pixels, respectively.
Figure 8. Examples of skin lesion segmentation results produced by LIN for the ISIC 2017 validation set. (a–d) are the results for melanoma, while (e–h) are the results for seborrheic keratosis and nevus. The blue and red lines represent the segmentation results and ground truths, respectively.
Figure 9. Loss curves of LIN trained with DR, DM and DR + DM.
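As a minimal illustration of the pre-processing in Figure 1, the Python sketch below center-crops a dermoscopy image and then proportionally resizes it to a lower resolution; the crop ratio and target width are placeholder values, not the settings used in the paper.

```python
from PIL import Image

def preprocess(path, crop_ratio=0.8, target_width=320):
    """Center-crop, then proportionally resize to a lower resolution.
    crop_ratio and target_width are illustrative values only."""
    img = Image.open(path)
    w, h = img.size
    cw, ch = int(w * crop_ratio), int(h * crop_ratio)
    left, top = (w - cw) // 2, (h - ch) // 2
    cropped = img.crop((left, top, left + cw, top + ch))
    scale = target_width / cw                      # keep the aspect ratio
    return cropped.resize((target_width, int(ch * scale)), Image.BILINEAR)
```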
Abstract
1. Introduction
- (1) Existing deep learning approaches commonly use two networks to separately perform lesion segmentation and classification. In this paper, we propose a framework consisting of multi-scale fully convolutional residual networks and a lesion index calculation unit (LICU) to simultaneously address lesion segmentation (task 1) and lesion classification (task 3). The proposed framework, henceforth named the Lesion Indexing Network (LIN), achieved excellent results in both tasks (a schematic sketch of the index calculation is given after this list).
- (2) We propose a CNN-based framework, named the Lesion Feature Network (LFN), to address task 2, i.e., dermoscopic feature extraction. Experimental results demonstrate the competitive performance of our framework. To the best of our knowledge, no previous work has addressed this task, so this work may serve as a benchmark for subsequent research in the area.
- (3) We analyze the proposed deep learning frameworks in detail from several perspectives, e.g., the performance of networks of different depths and the impact of adding different components (e.g., batch normalization and weighted softmax loss). This analysis provides useful guidelines for the design of deep learning networks in related medical research.
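To make contribution (1) concrete, the sketch below fuses the probability maps of two FCRNs and derives a per-class lesion index. The fusion rule (simple averaging over a crudely estimated lesion region) is a hypothetical illustration of such a calculation unit, not the paper's actual LICU formula.

```python
import numpy as np

def lesion_index(prob_a, prob_b, background_channel=0, threshold=0.5):
    """Hypothetical lesion-index calculation (NOT the paper's actual LICU).
    prob_a, prob_b: (H, W, C) softmax probability maps from the two FCRNs."""
    fused = (prob_a + prob_b) / 2.0                       # average the two networks
    lesion = fused[..., background_channel] < threshold   # crude lesion mask
    if not lesion.any():                                   # fall back to the whole image
        return fused.mean(axis=(0, 1))
    return fused[lesion].mean(axis=0)                      # mean class probability over the lesion
```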
2. Methods
2.1. Lesion Segmentation and Classification (Tasks 1 & 3)
2.1.1. Pre-Processing
2.1.2. Data Augmentation
2.1.3. Lesion Indexing Network (LIN)
Network Architecture
Lesion Index Calculation Unit (LICU)
Implementation
2.2. Dermoscopic Feature Extraction (Task 2)
2.2.1. Superpixel Extraction
2.2.2. Data Augmentation
Random Sample
Patch Rotation
2.2.3. Lesion Feature Network (LFN)
2.2.4. Implementation
3. Performance Analysis
3.1. Datasets
3.2. Evaluation Metrics
3.2.1. Lesion Segmentation
3.2.2. Dermoscopic Feature Extraction and Lesion Classification
3.3. Lesion Indexing Network (LIN)
3.3.1. The Performance on Lesion Segmentation
Training with DR and DM
Experiments on the Multi-Scale Input Images
3.3.2. The Performance on Lesion Classification
Performance of LICU
3.4. Lesion Feature Network (LFN)
3.4.1. Analysis of Network Architecture
3.4.2. Performance of Weighted Softmax Loss (WSL)
3.4.3. Usage of Batch Normalization (BN)
4. Comparison with Benchmarks
4.1. Lesion Segmentation
4.2. Dermoscopic Feature Extraction
4.3. Lesion Classification
5. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Jerant, A.F.; Johnson, J.T.; Sheridan, C.D.; Caffrey, T.J. Early detection and treatment of skin cancer. Am. Fam. Phys. 2000, 62, 381–382. [Google Scholar]
- Binder, M.; Schwarz, M.; Winkler, A.; Steiner, A.; Kaider, A.; Wolff, K.; Pehamberger, H. Epiluminescence microscopy. A useful tool for the diagnosis of pigmented skin lesions for formally trained dermatologists. Arch. Dermatol. 1995, 131, 286–291. [Google Scholar] [CrossRef] [PubMed]
- Celebi, M.E.; Wen, Q.; Iyatomi, H.; Shimizu, K.; Zhou, H.; Schaefer, G. A state-of-the-art survey on lesion border detection in dermoscopy images. In Dermoscopy Image Analysis; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
- Erkol, B.; Moss, R.H.; Stanley, R.J.; Stoecker, W.V.; Hvatum, E. Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes. Skin Res. Technol. 2005, 11, 17–26. [Google Scholar] [CrossRef] [PubMed]
- Celebi, M.E.; Aslandogan, Y.A.; Stoecker, W.V.; Iyatomi, H.; Oka, H.; Chen, X. Unsupervised border detection in dermoscopy images. Skin Res. Technol. 2007, 13. [Google Scholar] [CrossRef]
- Iyatomi, H.; Oka, H.; Celebi, M.E.; Hashimoto, M.; Hagiwara, M.; Tanaka, M.; Ogawa, K. An improved Internet-based melanoma screening system with dermatologist-like tumor area extraction algorithm. Comput. Med. Imag. Graph. 2008, 32, 566–579. [Google Scholar] [CrossRef] [PubMed]
- Celebi, M.E.; Kingravi, H.A.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H.; Malters, J.M.; Grichnik, J.M.; Marghoob, A.A.; Rabinovitz, H.S. Border detection in dermoscopy images using statistical region merging. Skin Res. Technol. 2008, 14, 347. [Google Scholar] [CrossRef] [PubMed]
- Norton, K.A.; Iyatomi, H.; Celebi, M.E.; Ishizaki, S.; Sawada, M.; Suzaki, R.; Kobayashi, K.; Tanaka, M.; Ogawa, K. Three-phase general border detection method for dermoscopy images using non-uniform illumination correction. Skin Res. Technol. 2012, 18, 290–300. [Google Scholar] [CrossRef] [PubMed]
- Xie, F.; Bovik, A.C. Automatic segmentation of dermoscopy images using self-generating neural networks seeded by genetic algorithm. Pattern Recognit. 2013, 46, 1012–1019. [Google Scholar] [CrossRef]
- Sadri, A.; Zekri, M.; Sadri, S.; Gheissari, N.; Mokhtari, M.; Kolahdouzan, F. Segmentation of dermoscopy images using wavelet networks. IEEE Trans. Biomed. Eng. 2013, 60, 1134–1141. [Google Scholar] [CrossRef] [PubMed]
- Celebi, M.E.; Wen, Q.; Hwang, S.; Iyatomi, H.; Schaefer, G. Lesion border detection in dermoscopy images using ensembles of thresholding methods. Skin Res. Technol. 2013, 19, e252–e258. [Google Scholar] [CrossRef] [PubMed]
- Peruch, F.; Bogo, F.; Bonazza, M.; Cappelleri, V.M.; Peserico, E. Simpler, faster, more accurate melanocytic lesion segmentation through MEDS. IEEE Trans. Biomed. Eng. 2014, 61, 557–565. [Google Scholar] [CrossRef] [PubMed]
- Gómez, D.D.; Butakoff, C.; Ersbøll, B.K.; Stoecker, W. Independent histogram pursuit for segmentation of skin lesions. IEEE Trans. Biomed. Eng. 2008, 55, 157–161. [Google Scholar] [CrossRef] [PubMed]
- Zhou, H.; Schaefer, G.; Sadka, A.; Celebi, M.E. Anisotropic mean shift based fuzzy c-means segmentation of skin lesions. IEEE J. Sel. Top. Signal Process. 2009, 3, 26–34. [Google Scholar] [CrossRef] [Green Version]
- Zhou, H.; Schaefer, G.; Celebi, M.E.; Lin, F.; Liu, T. Gradient vector flow with mean shift for skin lesion segmentation. Comput. Med. Imaging Graph. 2011, 35, 121–127. [Google Scholar] [CrossRef] [PubMed]
- Zhou, H.; Li, X.; Schaefer, G.; Celebi, M.E.; Miller, P. Mean shift based gradient vector flow for image segmentation. Comput. Vis. Image Underst. 2013, 117, 1004–1016. [Google Scholar] [CrossRef] [Green Version]
- Garnavi, R.; Aldeen, M.; Celebi, M.E.; Varigos, G.; Finch, S. Border detection in dermoscopy images using hybrid thresholding on optimized color channels. Comput. Med. Imaging Graph. 2011, 35, 105–115. [Google Scholar] [CrossRef] [PubMed]
- Pennisi, A.; Bloisi, D.D.; Nardi, D.; Giampetruzzi, A.R.; Mondino, C.; Facchiano, A. Skin lesion image segmentation using delaunay triangulation for melanoma detection. Comput. Med. Imaging Graph. 2016, 52, 89–103. [Google Scholar] [CrossRef] [PubMed]
- Ma, Z.; Tavares, J. A novel approach to segment skin lesions in dermoscopic images based on a deformable model. IEEE J. Biomed. Health Inform. 2017, 20, 615–623. [Google Scholar] [CrossRef] [PubMed]
- Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.A. Automated melanoma recognition in dermoscopy images via very deep residual networks. IEEE Trans. Med. Imaging 2017, 36, 994–1004. [Google Scholar] [CrossRef] [PubMed]
- Celebi, M.E.; Kingravi, H.A.; Uddin, B.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imaging Graph. 2007, 31, 362–373. [Google Scholar] [CrossRef] [PubMed]
- Celebi, M.E.; Iyatomi, H.; Schaefer, G.; Stoecker, W.V. Lesion border detection in dermoscopy images. Comput. Med. Imaging Graph. 2009, 33, 148–153. [Google Scholar] [CrossRef] [PubMed]
- Schaefer, G.; Krawczyk, B.; Celebi, M.E.; Iyatomi, H. An ensemble classification approach for melanoma diagnosis. Memet. Comput. 2014, 6, 233–240. [Google Scholar] [CrossRef]
- Stanley, R.J.; Stoecker, W.V.; Moss, R.H. A relative color approach to color discrimination for malignant melanoma detection in dermoscopy images. Skin Res. Technol. 2007, 13, 62–72. [Google Scholar] [CrossRef] [PubMed]
- Hospedales, T.; Romero, A.; Vázquez, D. Guest editorial: Deep learning in computer vision. IET Comput. Vis. 2017, 11, 621–622. [Google Scholar]
- Sulistyo, S.B.; Woo, W.L.; Dlay, S.S. Regularized neural networks fusion and genetic algorithm based on-field nitrogen status estimation of wheat plants. IEEE Trans. Ind. Inform. 2017, 13, 103–114. [Google Scholar] [CrossRef]
- Sulistyo, S.B.; Wu, D.; Woo, W.L.; Dlay, S.S.; Gao, B.; Member, S. Computational deep intelligence vision sensing for nutrient content estimation in agricultural automation. IEEE Trans. Autom. Sci. Eng. 2017, in press. [Google Scholar] [CrossRef]
- Sulistyo, S.; Woo, W.L.; Dlay, S.; Gao, B. Building a globally optimized computational intelligent image processing algorithm for on-site nitrogen status analysis in plants. IEEE Intell. Syst. 2018, in press. [Google Scholar] [CrossRef]
- Codella, N.; Cai, J.; Abedini, M.; Garnavi, R.; Halpern, A.; Smith, J.R. Deep learning, sparse coding, and svm for melanoma recognition in dermoscopy images. In International Workshop on Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2015; pp. 118–126. [Google Scholar]
- Codella, N.; Nguyen, Q.B.; Pankanti, S.; Gutman, D.; Helba, B.; Halpern, A.; Smith, J.R. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J. Res. Dev. 2016, 61. [Google Scholar] [CrossRef]
- Kawahara, J.; Bentaieb, A.; Hamarneh, G. Deep features to classify skin lesions. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 1397–1400. [Google Scholar]
- Li, Y.; Shen, L.; Yu, S. HEp-2 specimen image segmentation and classification using very deep fully convolutional network. IEEE Trans. Med. Imaging 2017, 36, 1561–1572. [Google Scholar] [CrossRef] [PubMed]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Vedaldi, A.; Lenc, K. MatConvNet—Convolutional neural networks for MATLAB. In Proceedings of the ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015; pp. 689–692. [Google Scholar]
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [PubMed]
- Lin, M.; Chen, Q.; Yan, S. Network in network. arXiv, 2013; arXiv:1312.4400. [Google Scholar]
- Lecun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
- Codella, N.C.F.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC). arXiv, 2017; arXiv:1710.05006. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
- Shelhamer, E.; Long, J.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Interventions, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Wen, H. II-FCN for skin lesion analysis towards melanoma detection. arXiv, 2017; arXiv:1702.08699. [Google Scholar]
- Attia, M.; Hossny, M.; Nahavandi, S.; Yazdabadi, A. Spatially aware melanoma segmentation using hybrid deep learning techniques. arXiv, 2017; arXiv:1702.07963. [Google Scholar]
- Kawahara, J.; Hamarneh, G. Fully convolutional networks to detect clinical dermoscopic features. arXiv, 2017; arXiv:1703.04559. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv, 2015; arXiv:1409.1556. [Google Scholar]
| | Melanoma (18°) | Seborrheic Keratosis (18°) | Nevus (45°) |
|---|---|---|---|
| Original | 374 | 254 | 1372 |
| DR | 7480 | 5080 | 10,976 |
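The DR row follows from rotating every original image through a full circle at the step given in the column header: 360°/18° = 20 rotated copies per melanoma or seborrheic keratosis image and 360°/45° = 8 per nevus image. A quick Python check of the counts:

```python
# Sanity check of the rotation-based augmentation (DR) counts in the table above.
classes = {"Melanoma": (374, 18), "Seborrheic Keratosis": (254, 18), "Nevus": (1372, 45)}
for name, (n_images, step_deg) in classes.items():
    copies = 360 // step_deg               # rotated copies per original image
    print(f"{name}: {n_images * copies}")  # 7480, 5080, 10976 -> matches the DR row
```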
| | Original | Random Sample + Rotation |
|---|---|---|
| Background (B) | >90,000 | 87,089 |
| Pigment Network (PN) | >80,000 | 77,325 |
| Negative Network (NN) | ~3000 | 12,908 |
| Milia-like Cysts (MC) | ~5000 | 18,424 |
| Streaks (S) | ~2000 | 8324 |
| Model | JA |
|---|---|
| FCRN-88 (DR) | 0.697 |
| FCRN-88 (DR + DM) | 0.607 |
| LIN (ours) | 0.710 |
| Model | JA |
|---|---|
| LIN (~300) | 0.710 |
| LIN (~500) | 0.698 |
| LIN (~700) | 0.662 |
| LIN (~300 + ~500) | 0.751 |
| LIN (~300 + ~500 + ~700) | 0.753 |
| Model | AUC |
|---|---|
| LIN without LICU | 0.891 |
| LIN with LICU | 0.912 |
| | LFN | Narrow LFN | Wide LFN |
|---|---|---|---|
| Stage 1 | 16, (3,3) | 16, (3,3) | 32, (3,3) |
| | 16, (1,1) | 16, (1,1) | 32, (1,1) |
| | 16, (3,3) | 16, (3,3) | 32, (3,3) |
| Stage 2 | 32, (3,3) | 16, (3,3) | 64, (3,3) |
| | 32, (1,1) | 16, (1,1) | 64, (1,1) |
| | 32, (3,3) | 16, (3,3) | 64, (3,3) |
| Stage 3 | 64, (3,3) | 16, (3,3) | 64, (3,3) |
| | 64, (1,1) | 16, (1,1) | 64, (1,1) |
| | 64, (3,3) | 16, (3,3) | 64, (3,3) |
| Stage 4 | 128, (3,3) | 32, (3,3) | 128, (3,3) |
| | 128, (1,1) | 32, (1,1) | 128, (1,1) |
| | 128, (3,3) | 32, (3,3) | 128, (3,3) |
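Each cell in the table gives the number of filters and the kernel size of one convolutional layer, so every stage is a 3×3 → 1×1 → 3×3 stack. The PyTorch sketch below reproduces the LFN column; padding, batch normalization placement, activations and down-sampling are assumptions added to make the sketch runnable, not details taken from the paper.

```python
import torch.nn as nn

class LFNStage(nn.Module):
    """One stage of the LFN column: 3x3 -> 1x1 -> 3x3 convolutions.
    BN placement, ReLU and max-pooling are illustrative assumptions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 1),           nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),   # down-sampling between stages is an assumption
        )

    def forward(self, x):
        return self.block(x)

# Filter counts of the LFN column: 16, 32, 64, 128
lfn_backbone = nn.Sequential(LFNStage(3, 16), LFNStage(16, 32), LFNStage(32, 64), LFNStage(64, 128))
```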
| Model | AUC |
|---|---|
| Narrow LFN | 0.822 |
| Wide LFN | 0.803 |
| LFN | 0.848 |
| LFN (without WSL) | 0.778 |
| LFN (without BN) | 0.842 |
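Given the strong class imbalance in the superpixel patches (background and pigment network far outnumber streaks in the earlier patch-count table), the weighted softmax loss can be read as a class-weighted cross-entropy. The inverse-frequency weighting below is an assumption used only to illustrate the idea, not the paper's exact weighting scheme.

```python
import torch
import torch.nn as nn

# Patch counts per class after augmentation (B, PN, NN, MC, S from the earlier table).
counts = torch.tensor([87089., 77325., 12908., 18424., 8324.])
weights = counts.sum() / (len(counts) * counts)   # inverse-frequency weights (an assumption)
weighted_softmax_loss = nn.CrossEntropyLoss(weight=weights)

# Usage: loss = weighted_softmax_loss(logits, targets), where logits has shape (N, 5).
```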
| Method | JA | AC | DC | SE | SP |
|---|---|---|---|---|---|
| FCN-8s [41] | 0.696 | 0.933 | 0.783 | 0.806 | 0.954 |
| U-net [42] | 0.651 | 0.920 | 0.768 | 0.853 | 0.957 |
| II-FCN [43] | 0.699 | 0.929 | 0.794 | 0.841 | 0.984 |
| Auto-ED [44] | 0.738 | 0.936 | 0.824 | 0.836 | 0.966 |
| LIN (ours) | 0.753 | 0.950 | 0.839 | 0.855 | 0.974 |
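For reference, JA, AC, DC, SE and SP denote the Jaccard index, accuracy, Dice coefficient, sensitivity and specificity. The NumPy sketch below shows how these segmentation metrics are typically computed from binary masks; the challenge's exact evaluation protocol (e.g., per-image averaging) may differ in detail.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """pred, gt: boolean arrays of the same shape (lesion pixels = True)."""
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return {
        "JA": tp / (tp + fp + fn),             # Jaccard index
        "DC": 2 * tp / (2 * tp + fp + fn),     # Dice coefficient
        "AC": (tp + tn) / (tp + tn + fp + fn), # accuracy
        "SE": tp / (tp + fn),                  # sensitivity
        "SP": tn / (tn + fp),                  # specificity
    }
```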
| Method | AUC | AC | AP | SE | SP |
|---|---|---|---|---|---|
| J. Kawahara [45] | 0.893 | 0.985 | 0.185 | 0.534 | 0.987 |
| LFN (ours) | 0.848 | 0.902 | 0.422 | 0.693 | 0.902 |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).