Article

Melanoma and Nevus Skin Lesion Classification Using Handcraft and Deep Learning Feature Fusion via Mutual Information Measures

by Jose-Agustin Almaraz-Damian 1,†, Volodymyr Ponomaryov 1,*,†, Sergiy Sadovnychiy 2,† and Heydy Castillejos-Fernandez 3,†
1 Instituto Politecnico Nacional, Santa Ana Ave. # 1000, Mexico City 04430, Mexico
2 Instituto Mexicano del Petroleo, Lazaro Cardenas Ave. # 152, Mexico City 07730, Mexico
3 Academic Area of Computer and Electronics, Institute of Basic Sciences and Engineering, Universidad Autonoma del Estado de Hidalgo, Pachuca–Tulancingo Highway Km. 4.5, Mineral de la Reforma, Hidalgo 42083, Mexico
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2020, 22(4), 484; https://doi.org/10.3390/e22040484
Submission received: 26 February 2020 / Revised: 16 April 2020 / Accepted: 20 April 2020 / Published: 23 April 2020
Figure 1. Block diagram of the novel Computer-Aided Detection (CAD) system.
Figure 2. (a) Original image I(x,y); (b) image (a) processed with a Gaussian filter.
Figure 3. (a) Original image I(x,y), (b) I(x,y) on channel L, (c) I(x,y) on channel a*, and (d) I(x,y) on channel b*.
Figure 4. Results of the thresholding stage: (a) original image I(x,y); (b) binary image I_ThL obtained from the threshold of the L channel; (c) binary image I_Tha obtained from the threshold of the a* channel; (d) binary image I_Thb obtained from the threshold of the b* channel.
Figure 5. Results of the preprocessing stage: original image I(x,y), (a) Region of Interest (ROI) obtained, (b) segmented image S_Ibin, (c) obtained asymmetry at 0°, and (d) obtained asymmetry at 90°.
Figure 6. Distribution of classes on the ISIC2018/HAM10000 dataset.

Abstract

In this paper, a new Computer-Aided Detection (CAD) system for the detection and classification of dangerous skin lesions (melanoma type) is presented, through a fusion of handcraft features related to the medical algorithm ABCD rule (Asymmetry Borders-Colors-Dermatoscopic Structures) and deep learning features employing Mutual Information (MI) measurements. The steps of a CAD system can be summarized as preprocessing, feature extraction, feature fusion, and classification. During the preprocessing step, a lesion image is enhanced, filtered, and segmented, with the aim of obtaining the Region of Interest (ROI); in the next step, the feature extraction is performed. Handcraft features such as shape, color, and texture are used as the representation of the ABCD rule, and deep learning features are extracted using a Convolutional Neural Network (CNN) architecture pre-trained on ImageNet (the ILSVRC classification task). MI measurement is used as a fusion rule, gathering the most important information from both types of features. Finally, at the classification step, several methods are employed, such as Logistic Regression (LR), Support Vector Machines (SVMs), and Relevant Vector Machines (RVMs). The designed framework was tested using the ISIC 2018 public dataset. The proposed framework appears to demonstrate improved performance in comparison with other state-of-the-art methods in terms of the accuracy, specificity, and sensibility obtained in the training and test stages. Additionally, we propose and justify a novel procedure for adjusting the evaluation metrics for the imbalanced datasets that are common for different kinds of skin lesions.

1. Introduction

Skin cancer has become one of the deadliest diseases for human beings. Globally, each year, between two and three million non-melanoma (less aggressive) cases occur, and over 130,000 melanoma (aggressive) types are diagnosed [1].
Melanoma is the deadliest type of skin cancer. Australia has the highest rates of skin cancer in the world. In 2018, melanoma accounted for about 22 % of skin cancer diagnoses, and non-melanoma tumors accounted for about 78 % [2]. Studies have shown that this disease is caused most of the time by exposure to UV radiation in daylight, tanning on sunbeds, and skin color, among other factors. Physicians stress that early detection is the best defense against a malignant skin lesion of any kind: the five-year survival rate increases to almost 99 % if the disease is spotted in its early stages.
Dermoscopy or Epiluminescence Microscopy (ELM) is a medical method that helps a physician to recognize whether a skin lesion belongs to a benign or malignant type of the disease. This method uses a dermatoscope, a tool consisting of a light source and a magnifying lens that enhances the view of medical patterns such as ramifications, globules, pigmented networks, veils, and colors, among others.
With the development of image processing techniques, Computer-Aided Detection (CAD) systems and approaches for the classification [3,4,5,6,7] and segmentation [8] of Pigmented Skin Lesions (PSLs) have improved, benefiting patient diagnosis in the early stages of the disease without invasive or painful medical procedures.
In this work, we propose a novel approach for classifying a skin lesion as melanoma or nevus, using handcraft features that depend on shape, color, and texture, which represent the ABCD rule (Asymmetry Borders-Colors-Dermatoscopic Structures), and combining them with deep learning features; these latter features were extracted using the transfer learning method as a generic feature extractor. In the next step, the most important features are selected according to the Mutual Information (MI) metric as the fusion technique, aiming at the best performance by taking into account the influence of both sets of features on the binary classification result.
This paper is organized as follows: Section 2 presents a brief review of the methods used in CAD developments with fused features and their nature, Section 3 explains in detail the proposed method, the materials used, and the evaluation metrics employed, and Section 4 describes the experimental results and presents a brief discussion. The conclusions are detailed in Section 5.

2. Literature Survey

Medical detection algorithms are one of the first tools used for determining whether a skin lesion is malignant or benign [9,10,11,12,13]. Nachbar et al. [9] developed a subjective method based on the visual perception of the lesion. The ABCD rule is based on color, shape, and particular structures that appear on skin lesions. Due to the simplicity of the algorithm, it is one of the most widely practiced for evaluating a lesion by a naked-eye exam or using a dermatoscope.
The ABCD medical algorithm is composed of the following parts:
  • Asymmetry A: The lesion is bisected by two perpendicular axes, at 90° to each other, positioned so as to yield the lowest possible asymmetry score; that is, it is determined whether the lesion is symmetrical or not. One point is added for each axis about which asymmetry is found.
  • Borders B: The lesion is divided into eight segments by eight axes to determine whether it has abrupt borders. If a segment presents an abrupt border, one point is added.
  • Colors C: The lesion can contain one or more of the following colors: white, brownish, dark brown, black, blue, and red. These colors are generated by vessels and melanin concentrations, so one point is added for each color found.
  • Dermatoscopic Structures D: The lesion may exhibit the following structures: dots, globules, pigmented networks, and non-structured areas. A point is added for each structure spotted on the lesion.
The described features are weighted as follows:
$$TDS = 1.3A + 0.1B + 0.5C + 0.5D,$$
where TDS is the Total Dermatoscopic Score; if it is less than 4.75, it is concluded that the lesion is benign; if the score is between 4.75 and 5.45, the lesion is considered suspicious; if it is more than 5.45, then it is considered malignant.
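For illustration, the scoring of this equation can be reproduced with a few lines of Python; the sub-scores chosen below are hypothetical and serve only to show how the TDS thresholds are applied.

```python
def tds(asymmetry, borders, colors, structures):
    """Total Dermatoscopic Score (TDS) from the four ABCD sub-scores."""
    return 1.3 * asymmetry + 0.1 * borders + 0.5 * colors + 0.5 * structures

def interpret(score):
    """Map a TDS value to the benign/suspicious/malignant ranges given above."""
    if score < 4.75:
        return "benign"
    if score <= 5.45:
        return "suspicious"
    return "malignant"

# Hypothetical lesion: asymmetry on both axes (A=2), four abrupt border
# segments (B=4), three colors (C=3), and two structures (D=2).
score = tds(2, 4, 3, 2)         # 1.3*2 + 0.1*4 + 0.5*3 + 0.5*2 = 5.5
print(score, interpret(score))  # 5.5 malignant
```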
This algorithm has variants, in which elements such as the dermatoscopic structures are replaced by diameter or darkness [14,15]. Additionally, in [16,17], the addition of features known as EFG properties has been suggested; E stands for elevation or evolution, F for firmness, and G for growth. These features work as complementary information obtained from a PSL. Modifications of the ABCD rule aim to simplify the evaluation of a skin lesion so that patients can examine themselves and record any change in the lesion. If the lesion is not identified by these methods, physicians are obliged to initiate invasive methods such as a biopsy to determine its type.
Adjed et al. [18] proposed a method where the aim is the fusion of structural features using Curvelet and Wavelet transform employing the Fast Digital Curvelet Transform (FDCT) wrapping method, and statistical metrics and texture features such as local binary pattern are then computed. They fused around 200 features via concatenation using the PH2 dataset [19].
Hagerty et al. [20] developed a fusion method where deep features are extracted from images using the transfer learning method based on the ResNET-50 Convolutional Neural Network (CNN) architecture. However, it is not entirely clear which handcraft features are used in their method. Moreover, they used a feature selection algorithm, in this case the χ2 method, for performance revision, employing two datasets: a private set and a second set (a modified version of the ISIC 2018 dataset) [21].
Li et al. [22] used a deep learning approach with the fusion of clinical criteria representations, where as a classifier and fusion method, a boosting tree-learning algorithm called LightGBM is used [23]. This method is applied for color properties (RGB and HSL features), texture properties (SIFT and LBP), and shape properties (solidity and circularity, image ratio, and area ratio). The deep learning features were obtained using the transfer learning method based on the ResNET-50 and DenseNET-201 CNN architectures. Data pertaining to 566 features were processed using the ISIC 2018 dataset [21].
Abbas and Celebi [24] proposed a CAD system where the lesion is processed by a Stack-Based Auto-Encoder (SAE), extracting the deep features from the pixels of a lesion while minimizing the information loss. The handcraft features are extracted for color (the Hill Climbing Algorithm (HCA)) and for texture (Speeded-Up Robust Features (SURF)). For feature fusion, they used Principal Component Analysis (PCA), and in the concluding stage, Recurrent Neural Networks (RNNs) and a softmax linear classifier were employed.
Among the reviewed methods, most use handcraft features and deep learning features extracted with the transfer learning method based on well-known CNN architectures [25,26,27,28,29,30,31]. As one can see, the reviewed schemes try to fuse the information extracted from the lesion images, gathering data via the concatenation of feature vectors, classifiers, and feature selection. The main drawback of such methods is that they disregard the medical information that is relevant to physicians beyond the data extracted by image processing algorithms. The methods analyzed above employ several possibilities for the fusion of features, but most of them do not consider the importance of each extracted feature according to its nature, which is relevant for the pertinence class. Moreover, some of them omit features based on medical algorithms under the assumption that perceptual handcraft features, grounded in the subjective human visual system, are weak. Modern image processing and machine learning approaches are able to learn these patterns and implement such features, as in a visual scoring system. Finally, some of the reviewed methods attempt multiclass classification, where the system assigns a lesion image to a specific lesion category. Nevertheless, a problem persists in multiclass classification: the data available for each class are limited, and some public databases are not well balanced for this task. As a result, a designed system can perform incorrect classifications. In summary, we consider it important to develop an intelligent system that can correctly classify melanoma disease employing both types of features, where medical features contribute to the classification with the aid of deep learning features, aiming for the best performance.
The novel method considers relevant information obtained from handcraft and deep learning features, improving the performance quality measured by commonly used criteria: accuracy, specificity, and sensibility. Unlike other schemes, our novel framework encourages the use of ABCD rule features, also known as perceptual features, together with a set of features equivalent to or based on a similar medical nature.

Principal Contributions

The principal contributions of our novel approach in the classification of dermoscopic lesions are summarized as follows:
  • A brief survey of computer-aided detection methods that employ fusion between handcraft and deep learning features is presented.
  • Despite the recent tendency to avoid the ABCD medical algorithm or any of its variations, we utilize descriptors based on it, such as shape, color, and texture, as a new aggregation, and the extraction of deep learning features is performed afterwards.
  • A balancing method was employed due to the class imbalance of the ISIC database. The SMOTE oversampling technique was applied, which in this work demonstrates an improvement in performance in differentiating melanoma from benign lesion images.
  • A fusing method that employs relevant mutual information obtained from handcraft and deep learning features was designed, and it appears to demonstrate better performance in comparison with state-of-the-art CAD systems.

3. Materials and Methods

In this section, the proposed system is described. A brief conceptual block diagram of the system is illustrated in Figure 1. As an initial step, the pigmented skin lesion image is segmented from the surrounding normal skin tissue and artefacts such as veils, hairs, and air bubbles, among others, by color space transformation, mean thresholding, and extraction of the Region of Interest (ROI). Subsequently, using the binary mask image and the ROI image, a set of handcraft features based on shape, color, and texture is extracted. Thereafter, deep learning features are obtained using a selected CNN architecture pre-trained on the ImageNet classification task; this CNN is employed as a feature extractor. All extracted features are concatenated into one vector, which is later fused according to the MI criterion. The selected classifier is trained on the ISIC dataset comprising both malignant and benign skin lesion images. Finally, the trained classifier model is used to predict each unseen pigmented skin lesion image as a benign or malignant lesion. The details of each stage of the proposed method are described in the remainder of this section.

3.1. Preprocessing

An image I ( x , y ) that is analyzed can contain a lesion with some artefacts such as veils, hairs, stamps, among others. In the first step, we apply a preprocessing stage, where an image is enhanced [8,32,33].
A Gaussian filter is applied to blur the artefacts contained in the image, primarily hair, marks, and spots, among others, while maintaining the geometric shape of the lesion.
The Gaussian filter is denoted as follows:
$$G(x,y) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right),$$
where $\sigma^2$ is the variance of the spatial kernel. This step is shown in Figure 2.
The CIEL*a*b* space is characterized by more closely approximating the human perception system. Its channels are as follows: L stands for lightness, while a* and b* are chroma channels, where a* is a parametric measure between magenta and green, and b* is a parametric measure between blue and yellow. The L channel takes values in [0, 100], and the a* and b* chroma channels take values in approximately [−30, 30]. This transformation is used to avoid correlation between channels while keeping the perceptual data intact, such as the fact that a pigmented skin lesion is darker than healthy skin tissue, as one of the sub-variants of the ABCD algorithm states [14].
In each channel of CIEL*a*b* for images I L , I a , I b , the mean thresholding procedure is applied. Such thresholding allows one to differentiate skin tissue from lesion tissue. In Figure 3, one can see how the CIEL*a*b* space is able to visually separate this information. These mean values are calculated as follows:
$$\bar{I}_L = \frac{1}{mn}\sum_{x=1}^{m}\sum_{y=1}^{n} I_L(x,y),$$
$$\bar{I}_a = \frac{1}{mn}\sum_{x=1}^{m}\sum_{y=1}^{n} I_a(x,y),$$
$$\bar{I}_b = \frac{1}{mn}\sum_{x=1}^{m}\sum_{y=1}^{n} I_b(x,y),$$
where ( x , y ) are the spatial coordinates, m , n are the sizes of an image, and I L ¯ , I a ¯ , I b ¯ denote the mean values. The thresholding operation is applied in each channel of the CIEL*a*b* space, forming the thresholded channel images as follows:
$$I_{Th_L}(x,y) = \begin{cases} 1, & I_L(x,y) \geq \bar{I}_L, \\ 0, & \text{otherwise}, \end{cases}$$
$$I_{Th_a}(x,y) = \begin{cases} 1, & I_a(x,y) \geq \bar{I}_a, \\ 0, & \text{otherwise}, \end{cases}$$
$$I_{Th_b}(x,y) = \begin{cases} 1, & I_b(x,y) \geq \bar{I}_b, \\ 0, & \text{otherwise}. \end{cases}$$
Afterwards, the following logic operation is applied on each binarized image, I T h L , I T h a , and I T h b , to form a binary mask I b i n ( x , y ) of the image.
$$I_{bin}(x,y) = I_{Th_L} \wedge I_{Th_a} \wedge I_{Th_b}.$$
Examples of the extracted binary mask images are given in Figure 4. Finally, a median filter with a 5 × 5 kernel is applied to I_bin(x,y), removing the remaining artefacts that resist thresholding. Next, a bounding box algorithm is performed. The bounding box [34] is a method used to compute an imaginary rectangle that completely encloses the given object. This rectangle can be determined by the x- and y-axis coordinates of the upper-left and lower-right corners of a shape. This method is commonly used in object detection tasks because it estimates the coordinates of the ROI in an image.
Bissoto et al. [35] have shown the effect of bias between different types of image segmentation, where such biases can negatively affect the performance of classification models. They consider that using the bounding box algorithm to segment a lesion is appropriate because a CNN architecture can extract all the relevant features of a lesion and distinguish it from the surrounding healthy skin. Therefore, we adopt this solution to reduce the bias of the classification model before processing.
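A minimal sketch of this preprocessing pipeline is given below. It is not the authors' code: scikit-image (≥ 0.19 for the channel_axis argument) and SciPy are assumed, the Gaussian σ is arbitrary, and the comparison direction and the logical combination of the three channel masks are assumptions marked in the comments.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage import color, filters

def preprocess(rgb, sigma=2.0):
    """Sketch of the preprocessing stage: Gaussian blur, CIEL*a*b* transform,
    mean thresholding per channel, mask clean-up, and bounding-box ROI crop."""
    blurred = filters.gaussian(rgb, sigma=sigma, channel_axis=-1)   # Equation (2)
    lab = color.rgb2lab(blurred)                                    # L, a*, b*
    # Mean thresholding per channel, Equations (3)-(8); the comparison
    # direction (>= mean) is an assumption of this sketch.
    masks = [lab[..., c] >= lab[..., c].mean() for c in range(3)]
    binary = np.logical_and.reduce(masks)        # Equation (9); logic op assumed
    binary = median_filter(binary.astype(np.uint8), size=5).astype(bool)
    ys, xs = np.nonzero(binary)                  # bounding box of the lesion mask
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    return rgb[y0:y1, x0:x1], binary[y0:y1, x0:x1]   # ROI image and cropped mask
```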

3.2. Handcraft Features

The ABCD rule represents a set of perceptual features stated by the findings of patterns in PSLs. The ABCD method employs features that are based mostly on shapes, color, and texture. The selected features in this study are the representation of medical attributes using image processing algorithms.
Sirakov et al. [36] proposed a method to estimate the asymmetry of a lesion based on the binary mask I b i n , which is obtained from the previous thresholding step. Then, by rotating it through 180°, the symmetry mask S I b i n is formed, and the synthetic image A is calculated as follows:
$$A = I_{bin} \oplus S_{I_{bin}},$$
where A is the generated image containing the non-overlapping regions of the lesion, called the false symmetry FS; therefore, we apply
$$Sym_{0^\circ} = 1 - (FS/A),$$
and this technique is applied on the 0° axis of the binary image.
In this study, a variation of the previous method is proposed, in which the generated image A is computed about both the major axis and the minor axis by applying the same procedure, and finally the average symmetry value between the two axes is computed as follows:
$$Symmetry = \frac{Sym_{0^\circ} + Sym_{90^\circ}}{2}.$$
The symmetry values belong to the interval [0, 1]; the closer this index is to the highest value (1), the more symmetric the lesion. Figure 5 shows the extracted ROI images.
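The following sketch implements this averaged symmetry index, interpreting the 180° rotation about each axis as a reflection and normalizing the false-symmetry area by the lesion area; both choices are assumptions of this illustration, not the authors' exact implementation.

```python
import numpy as np

def symmetry_score(mask):
    """Average symmetry of a binary lesion mask, assumed cropped and roughly
    centred on the lesion; returns a value in [0, 1] (1 = fully symmetric)."""
    def axis_symmetry(reflected):
        false_sym = np.logical_xor(mask, reflected)        # non-overlapping FS
        return 1.0 - false_sym.sum() / max(mask.sum(), 1)  # normalisation assumed
    sym_0 = axis_symmetry(np.fliplr(mask))    # reflection about the 0° axis
    sym_90 = axis_symmetry(np.flipud(mask))   # reflection about the 90° axis
    return (sym_0 + sym_90) / 2.0
```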

3.2.1. Shape Features

Shape features or geometric features [34] can describe an object or a form in numerical values to represent human perception.
For shape features, the following equations are employed:
$$Area = \sum_{x=1}^{m}\sum_{y=1}^{n} I_{bin}(x,y),$$
where m, n are the sizes of the image, and x, y are the spatial coordinates; the area is therefore the number of pixels contained in the ROI of a lesion.
$$Perimeter = \sum_{i=1}^{m} \sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2},$$
where (x_i, y_i) are the spatial coordinates of the i-th pixel forming the contour of the region; the perimeter thus counts the pixels along the boundary of the ROI of a lesion.
$$Circularity = \frac{4\pi \cdot Area}{Perimeter^2},$$
where the circularity shows the similarity between a shape and a circle.
$$Diameter = \frac{1}{2}\left[(\mu_{2,0} + \mu_{0,2}) \pm \sqrt{4\mu_{1,1}^2 + (\mu_{2,0} - \mu_{0,2})^2}\right],$$
where the diameter is formed by obtaining the length of the major axis and the minor axis of the shape, computed from the 2nd central moment. This measure connects two pairs of points on the perimeter of the shape.
$$Eccentricity = \frac{(\mu_{0,2} - \mu_{2,0})^2 + 4\mu_{1,1}}{Area},$$
which measures the ratio of the length of the major axis to the length of the minor axis.
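These shape descriptors can be obtained from the binary mask with scikit-image's regionprops, as sketched below; regionprops' internal moment-based formulas may differ slightly from the equations above, so this is an approximation rather than the authors' implementation.

```python
import numpy as np
from skimage.measure import label, regionprops

def shape_features(binary_mask):
    """Shape descriptors of the largest connected region of a lesion mask."""
    region = max(regionprops(label(binary_mask.astype(int))), key=lambda r: r.area)
    area, perimeter = region.area, region.perimeter
    return {
        "area": float(area),
        "perimeter": float(perimeter),
        "circularity": 4.0 * np.pi * area / max(perimeter ** 2, 1e-9),
        "diameter_major": region.major_axis_length,
        "diameter_minor": region.minor_axis_length,
        "eccentricity": region.eccentricity,
    }
```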

3.2.2. Colour Features

Medical algorithms, in particular the ABCD rule, try to present, as features, a set of colors contained on a PSL. Therefore, these features can be replaced by statistical characteristics obtained from color spaces. In this study, the following characteristics are used:
$$Min_{ch} = \min[I_{ch}(x,y)],$$
$$Max_{ch} = \max[I_{ch}(x,y)],$$
$$Var_{ch} = \operatorname{var}[I_{ch}(x,y)],$$
$$Mean_{ch} = \overline{I_{ch}(x,y)},$$
where I c h ( x , y ) is the image of a chosen channel for the PSL image in RGB and CIEL*a*b* color spaces.
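A sketch of these per-channel statistics is shown below; the set of channels is taken from the text (RGB and CIEL*a*b*), while the naming convention is an assumption of this illustration.

```python
import numpy as np
from skimage import color

def colour_features(roi_rgb):
    """Min, max, variance, and mean per channel of the ROI image."""
    lab = color.rgb2lab(roi_rgb)
    channels = {"R": roi_rgb[..., 0], "G": roi_rgb[..., 1], "B": roi_rgb[..., 2],
                "L": lab[..., 0], "a": lab[..., 1], "b": lab[..., 2]}
    feats = {}
    for name, ch in channels.items():
        feats[f"min_{name}"] = float(ch.min())
        feats[f"max_{name}"] = float(ch.max())
        feats[f"var_{name}"] = float(ch.var())
        feats[f"mean_{name}"] = float(ch.mean())
    return feats
```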

3.2.3. Texture Features

Haralick et al. [37] proposed the Gray Level Co-occurrence Matrix (GLCM). This method analyzes the statistical texture features of an image. The texture features provide information about how the gray intensities of the PSL in the image are distributed. The GLCM shows how often a gray level occurs at a pixel located in a fixed position relative to another, using P_d(i,j) as the (i,j) element of the normalized GLCM; N_g is the number of gray levels; σ_x, σ_y and μ_x, μ_y are the standard deviations and the mean values along the i and j axes of the GLCM, expressed as follows:
$$\mu_x = \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} i\, P_d(i,j),$$
$$\mu_y = \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} j\, P_d(i,j),$$
$$\sigma_x = \sqrt{\sum_{i=1}^{N_g}\sum_{j=1}^{N_g} (i - \mu_x)^2 P_d(i,j)},$$
$$\sigma_y = \sqrt{\sum_{i=1}^{N_g}\sum_{j=1}^{N_g} (j - \mu_y)^2 P_d(i,j)}.$$
The 13 features used in this study are as follows:
$$ASM = \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} P_d^2(i,j).$$
The Angular Second Moment (ASM) measures the uniformity of the local gray values.
$$Contrast = \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} (i - j)^2 P_d(i,j).$$
This is the second moment about the GLCM diagonal; it measures the intensity variations between neighboring pixels.
$$Correlation = \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} \frac{P_d(i,j)\,(i - \mu_x)(j - \mu_y)}{\sigma_x \sigma_y}.$$
This is the linear dependency of the gray level values.  
$$Variance = \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} (i - \mu)^2 P_d(i,j).$$
This is the second moment. It shows the spread around the mean in the surrounding neighborhood.
$$IDM = \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} \frac{1}{1 + (i - j)^2}\, P_d(i,j).$$
This is the Inverse Difference Moment (IDM). It shows how close the elements of the GLCM are in their distribution.
$$Entropy = -\sum_{i=1}^{N_g}\sum_{j=1}^{N_g} P_d(i,j)\,\ln[P_d(i,j)].$$
This is the measure of randomness of the gray values in an image.
Additional texture features used in this study are based on the sum and difference statistics, defined through the probabilities $P_{x+y}(k)$ and $P_{x-y}(k)$:
$$P_{x+y}(k) = \sum_{i=1}^{N_g}\sum_{\substack{j=1 \\ i+j=k}}^{N_g} P_d(i,j), \quad k = 2, 3, \dots, 2N_g,$$
$$P_{x-y}(k) = \sum_{i=1}^{N_g}\sum_{\substack{j=1 \\ |i-j|=k}}^{N_g} P_d(i,j), \quad k = 0, 1, \dots, N_g - 1,$$
where $P_d(i,j)$ is the $(i,j)$-th element of the GLCM, and $N_g$ is the number of gray levels.
$$SumVariance = \sum_{k=2}^{2N_g} (k - \mu_{x+y})^2 P_{x+y}(k),$$
$$SumEntropy = -\sum_{k=2}^{2N_g} P_{x+y}(k)\log[P_{x+y}(k)],$$
$$DifferenceVariance = \sum_{k=0}^{N_g-1} (k - \mu_{x-y})^2 P_{x-y}(k),$$
$$DifferenceEntropy = -\sum_{k=0}^{N_g-1} P_{x-y}(k)\log[P_{x-y}(k)],$$
$$IMCorr1 = \frac{H(XY) - H(XY1)}{\max[H(X), H(Y)]},$$
$$IMCorr2 = \sqrt{1 - \exp\{-2[H(XY2) - H(XY)]\}},$$
where H ( X ) , H ( Y ) , H ( X Y ) , H ( X Y 1 ) and H ( X Y 2 ) are denoted as follows:
$$H(X) = -\sum_{i=1}^{N_g} P_x(i)\log[P_x(i)],$$
$$H(Y) = -\sum_{i=1}^{N_g} P_y(i)\log[P_y(i)],$$
$$H(XY) = -\sum_{i=1}^{N_g}\sum_{j=1}^{N_g} P_d(i,j)\log[P_d(i,j)],$$
$$H(XY1) = -\sum_{i=1}^{N_g}\sum_{j=1}^{N_g} P_d(i,j)\log[P_x(i)\,P_y(j)],$$
$$H(XY2) = -\sum_{i=1}^{N_g}\sum_{j=1}^{N_g} P_x(i)\,P_y(j)\log[P_x(i)\,P_y(j)].$$
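The 13 Haralick descriptors above are available in the Mahotas library listed in Section 4.1.1; the short sketch below assumes an 8-bit grayscale ROI and averages the descriptors over the four GLCM directions, which is an assumption of this illustration.

```python
import mahotas
import numpy as np

def texture_features(roi_gray_uint8):
    """13 Haralick GLCM features (ASM, contrast, correlation, ...)."""
    # haralick() expects an integer image and returns 13 features per direction;
    # return_mean=True averages them over the four directions.
    return np.asarray(mahotas.features.haralick(roi_gray_uint8, return_mean=True))
```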

3.3. Deep Learning Features

Based on the discrete convolution operation,
$$W(i,j) = (K * I)(i,j) = \sum_{m}\sum_{n} I(i-m, j-n)\,K(m,n),$$
the CNN [38,39] represents one type of deep learning strategy [40], the basis of which is to obtain the information of an image I(i,j) using filters K(m,n), which are trained in a neural network through feed-forward and back-propagation passes, according to
$$y = z(W * X) + b,$$
where W are the computed values for the filters, z is the activation function, X is the input, and b is the bias [40,41].
The design of a CNN architecture is a rather complex task due to the number of parameters that must be specified, such as the number and size of the filters and the depth of the network, some of which are task-dependent.

3.3.1. Transfer Learning

The main problem of using a deep learning approach is that a large amount of data is needed to train the network from scratch. Usually, to overcome this problem, the transfer learning method [42,43,44] is applied.
Transfer learning is a technique that can be defined as the generalization of a target task based on applied knowledge extracted from one or more source tasks [43]. The idea originates from human thinking: we do not learn exactly how to recognize a chair, a plane, or a book; we start by recognizing colors, shapes, and textures, and someone else then tells us how to differentiate a chair from an apple. This sharing of knowledge between beings helps us to understand the world as infants. Another idea is handling information collected for a task to solve related ones. Therefore, transfer learning can be defined as follows:
Assume a domain D, which consists of two components:
$$D = \{\chi, P(X)\},$$
where $\chi$ is a feature space with a marginal distribution $P(X)$, $X = \{x_1, \dots, x_n\}$, $x_i \in \chi$.
Given a task T with two components,
$$T = \{\gamma, P(Y|X)\} = \{\gamma, \eta\}; \quad Y = \{y_1, \dots, y_n\}, \; y_i \in \gamma,$$
where $\gamma$ is a label space with a predictive function $\eta$, which is trained from pairs $(x_i, y_i)$, $x_i \in \chi$, $y_i \in \gamma$; for each feature vector in the domain D, $\eta$ predicts the corresponding label $\eta(x_i) = y_i$.
The paper [43] defines the following scenarios:
Given a source domain and a target domain, $D_S$ and $D_T$, where $D = \{\chi, P(X)\}$, and the related tasks $T_S$ and $T_T$, where $T = \{\gamma, P(Y|X)\}$, the conditions can vary as follows:
  • $\chi_S \neq \chi_T$: The feature spaces are different.
  • $P(X_S) \neq P(X_T)$: The marginal probability distributions are different.
  • $\gamma_S \neq \gamma_T$: The label spaces are different.
  • $P(Y_S|X_S) \neq P(Y_T|X_T)$: The conditional probability distributions are different.
Transfer learning is defined in [44] as follows: given a source domain $D_S$ with a corresponding source task $T_S$ and a target domain $D_T$ with a corresponding task $T_T$, transfer learning is the process of enhancing the target predictive function $f_T(\cdot)$ using the related information from the source domain $D_S$ and the related task $T_S$.
CNN architectures overlay filters that sample the data contained in an image. These filters yield hierarchical representations, called feature maps, as the filters learn features from the image data. They are connected to the last layer of the CNN architecture, which is a neural network classifier referred to in the literature as a fully connected layer [40].
Moreover, CNN architectures belong to a class of inductive learning algorithms, where the objective of these algorithms is to map input features between classes seeking the generalization of the data.
Therefore, inductive learning can be transferred from an architecture trained on the source task to the target class by adjusting the model space, correcting the inductive bias. Commonly, this is performed by replacing the last layer of the model, i.e., the original classifier, with a lightweight classifier that is then trained on the generalized features.
In this study, we employed the transfer learning method on architectures that are pre-trained on a similar task, namely the ImageNet classification task [45], under the assumption that $D_S = D_T$, so that the CNN architecture performs as a generic feature extractor.

3.3.2. Feature Extractor

In our case, the CNN Architecture was used as a deep feature extractor, where features are extracted by the following:
Consider an image I ( x , y ) of the domain D T that is mapped or transformed by
$$W = \{w_1, \dots, w_n\}; \quad w_i \in \mathbb{R}^{M \times N \times h},$$
where W is the weight computed by the feature extractor, where M , N , and h are the proposed size of the CNN architecture. As a result, we can obtain
$$P = W(I(x,y)) = \{w_1(I(x,y)), \dots, w_n(I(x,y))\} \in \mathbb{R}^{M \times N \times h},$$
and the pooling transformations are based on
Q = f ( 0 , P ) = { f ( 0 , w 1 ( I ( x , y ) ) ) , , f ( 0 , w n ( I ( x , y ) ) ) } ,
where f ( · ) is a mapping function.

3.3.3. Deep Learning Architectures

Below, we use the following architectures: VGG-16/VGG-19 [26], MobileNet [27], ResNet-50 [28], Inception v3 [29], Xception [30], and DenseNet-201 [31]. The selection of these architectures was based on the fact that they have been shown to obtain the best Top-1 and Top-5 accuracy and error rates on the ImageNet classification task proposed in [45].
Table 1 lists the number of features extracted by each utilized architecture.
The features for each selected architecture are concatenated with the extracted handcraft features, forming a unique feature vector:
$$F = Deep[\#Features\ Extracted] \cup Hand[Asymmetry, Area, \dots, Entropy, Contrast, Std, Max, Min, \dots],$$
where the size of vector F is equal to 43 for the handcraft features, plus the number of features extracted from the CNN architecture used.
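As a sketch of the feature-extraction and concatenation step, the snippet below uses Keras (listed in Section 4.1.1) with a MobileNetV2 backbone, global average pooling, and a 224 × 224 input; the specific backbone, pooling, and input size are assumptions of this illustration.

```python
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing import image

# ImageNet-pretrained backbone used as a generic feature extractor (no classifier head).
extractor = MobileNetV2(weights="imagenet", include_top=False, pooling="avg",
                        input_shape=(224, 224, 3))

def deep_features(roi_path):
    """Deep learning feature vector of one ROI image (1280-D for MobileNetV2)."""
    img = image.load_img(roi_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x, verbose=0).ravel()

def fused_vector(roi_path, handcraft_features):
    """Concatenate deep and handcraft features into the single vector F."""
    return np.concatenate([deep_features(roi_path), np.asarray(handcraft_features)])
```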

3.4. Algorithm Summary

The sum of all proposed and explained procedures can be described in the form of an algorithm for extracting features from PSL images. The proposed CAD system consists of four principal stages: (a) preprocessing, (b) handcraft features, (c) deep learning features, and (d) the fusion stage. In the first stage, artifacts are attenuated using a Gaussian filter, after which a CIEL*a*b* color transformation is applied, and mean thresholding per channel is employed. We then extract the ROI using the bounding box algorithm. In the second stage, shape features are extracted using Equations (12)–(17), Equations (18)–(21) are computed for statistical color features, and texture features are then calculated from Equations (22)–(38). The ROI image is finally processed by the chosen CNN architecture, whose features are concatenated in the following steps.
Algorithm 1 presents the details of the feature extraction process for PSL images.

3.5. Feature Selection

After extracting the deep learning and handcraft features, the number of features must be reduced; high-dimensional data are a typical problem in machine learning algorithms because they increase the computing time for a prediction.
Feature selection is one method to resolve this problem. In [46], filtering methods are applied, and the features are selected based on various statistical tests, such as χ2, ANOVA, and Linear Discriminant Analysis (LDA), among others [47].
The extracted data can be represented in the form of a high-dimensional matrix defined by
$$X \in \mathbb{R}^{n \times p},$$
where X is the extracted data, n represents the instances or elements, and p represents the features extracted for each element.
The idea is to reduce the data as much as possible by selecting, for each element, a subset of the extracted features that is relevant to the pertinence category or label y. This subset is defined as
$$X_S \in \mathbb{R}^{n \times k},$$
where $X_S$ represents the reduced data, n is the same number of instances as in the original data matrix, and k is the number of selected features, with $k \ll p$.

Mutual Information

In this work, we propose the MI metric to reduce the data of the extracted features. MI is a measure based on the entropy measure:
$$I(X;Y) = H(X) - H(X|Y) = \sum_{y \in Y}\sum_{x \in X} P(x,y)\log_2\frac{P(x,y)}{P(x)P(y)},$$
where X = x 1 , , x n and Y = y 1 , , y n in the multi-variable case, H ( X | Y ) is the conditional entropy between two random variables, and H ( X ) is the entropy of a random variable [48,49,50].
$$H(X) = -\sum_{i=1}^{n} P(x_i)\log_2 P(x_i).$$
Algorithm 1 Algorithm summary
Require: PSL image I

(a) Preprocessing
1: Input: I
   Apply Gaussian filter, Equation (2)
   Apply RGB to CIEL*a*b* color transform
   Separate CIEL*a*b* image I_Lab into channel images I_L, I_a, I_b
   Calculate mean values of I_L, I_a, I_b, Equations (3)–(5)
2: for all (x, y) in I_L do
3:     if I_L(x, y) ≥ Ī_L then
4:         Assign 1 to I_ThL(x, y)
5:     else
6:         Assign 0 to I_ThL(x, y)
7:     end if
8: end for
9: for all (x, y) in I_a do
10:     if I_a(x, y) ≥ Ī_a then
11:         Assign 1 to I_Tha(x, y)
12:     else
13:         Assign 0 to I_Tha(x, y)
14:     end if
15: end for
16: for all (x, y) in I_b do
17:     if I_b(x, y) ≥ Ī_b then
18:         Assign 1 to I_Thb(x, y)
19:     else
20:         Assign 0 to I_Thb(x, y)
21:     end if
22: end for
    Compute I_bin(x, y) by applying Equation (9) to I_ThL, I_Tha, I_Thb
    Apply median filter, size 5 × 5
    Compute bounding box algorithm to estimate coordinates of the Region of Interest
    Crop I_roi(x, y) from the estimated coordinates on I(x, y) and I_Lab(x, y)
23: Output: Region of Interest (ROI) image I_roi(x, y)

(b) Handcraft features
24: Input: I_roi
    Compute Area, Perimeter, Circularity, Diameter, and Eccentricity from Equations (13)–(17)
    Compute Asymmetry from Equations (10)–(12)
    Compute color features from Equations (18)–(21) on I_roi(x, y)
    Compute texture features from Equations (22)–(43) on I_roi(x, y)
25: Concatenate the extracted features into H
26: Output: H handcraft features

(c) Deep learning features
27: Input: I_roi
28: Load the weights W_i from the selected CNN architecture
29: Apply the weights W_i to I_roi
30: Obtain the D deep learning features
31: Output: D deep learning features

(d) Wrapping features
32: Input: D, H
33: Apply H ∪ D to the extracted features
34: Output: F, full set of extracted features
Ross [51] proposed an MI estimator for continuous and discrete data, aimed at quantifying the relationship between datasets.
Based on the nearest-neighborhood rule, the idea is to find the k nearest neighbors of a point i among all the data points $N_{x_i}$ using the Chebyshev distance metric:
$$D_{chebyshev} = \max_i(|x_i - y_i|),$$
and the MI measure is then computed as follows:
$$I(X,Y) = \langle I_i \rangle = \psi(N) - \psi(N_x) + \psi(k) - \psi(m),$$
where ψ ( · ) is the digamma function, N x is the average of the data points, k is the number of k-closest neighbors to the point i, and m is the average counted neighbors among the full dataset.
If our dataset contains continuous data points, these are discretized using a binning method—grouping the data into bins—leading to a binned approximation of MI as follows:
$$I(X,Y) = \left\langle \log\frac{p(x_i, b_i)}{p(x_i)\,p(b_i)} \right\rangle_i.$$
After applying the MI method, a new vector is delivered that contains the MI value of each feature among all obtained features. Next, the mean MI value over all features is calculated. We propose to use this value as a threshold, discarding the features with the lowest MI values and keeping the features with the highest MI values. Table 2 and Table 3 show examples of different parts of the fused feature vector.
$$L_{features}(x,y) = F(x,y) > \overline{MI(x,y)}.$$
The resulting subset contains the features with the highest mutual information values, which we consider as the fused data of both sets of features extracted from a PSL. Below, as an example, we present the behavior of the features and their MI values.
Table 2 and Table 3 expose, for illustration, several MI values between the extracted features for the dataset of images. Some of the deep learning features appear to demonstrate negligible MI values for the binary classification problem, according to the complete set of features extracted from the database, as one can see in Table 2.
Table 3 exposes several features with significant values of MI that are merged with additional significant handcraft features, forming the final set of features for the proposed system. Therefore, the proposed fusion method based on MI measurements demonstrates the fusion for both types of features in accordance with their influence on the binary classification problem.
Therefore, in contrast with state-of-the-art techniques that use Concat, PCA-based, and χ 2 -based methods, among others, for the selection of significant features, the proposed approach employs the information measures, justifying the informative weight of each feature that is used in the classification stage.
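A sketch of this MI-based selection with scikit-learn is given below; mutual_info_classif uses a nearest-neighbour estimator in the spirit of Ross [51], and the mean-MI threshold reproduces the rule described above. This is an illustration, not the authors' exact code.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_by_mutual_information(F, y, random_state=0):
    """Keep the features whose MI with the class label exceeds the mean MI."""
    mi = mutual_info_classif(F, y, random_state=random_state)  # MI per feature
    keep = mi > mi.mean()                                      # mean-MI threshold
    return F[:, keep], keep, mi
```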

4. Results and Discussion

The following classifiers were employed in this work: logistic regression, support vector machines with linear and RBF kernels [52], and the relevant vector machine [53,54]. The rationale for using different classifiers lies in the fundamental idea of transfer learning, whereby, after extracting generic features, a shallow classifier must be applied to test the proposed method.
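A minimal sketch of this shallow-classifier stage is given below; the hyperparameters are assumptions, and the relevant vector machine is omitted because it is not part of scikit-learn.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def train_shallow_classifiers(X_train, y_train, X_test, y_test):
    """Train LR and SVM (linear/RBF) on the fused features and report accuracy."""
    classifiers = {
        "LR": LogisticRegression(max_iter=1000),
        "SVM-linear": SVC(kernel="linear"),
        "SVM-rbf": SVC(kernel="rbf"),
    }
    return {name: clf.fit(X_train, y_train).score(X_test, y_test)
            for name, clf in classifiers.items()}
```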

4.1. Experimental Results

4.1.1. Experimental Setup

The described method was performed on a PC with an Intel® Xeon E5 1230-V5 CPU, 32GB RAM, NVIDIA GeForce® 1080Ti with 11 GB RAM, running a Linux 64-bit operating system, Python 3.5, and the libraries: Keras 2.3 [55], Sklearn [56], Mahotas [57], thundersvm [58], and Imblearn [59].

4.1.2. Evaluation Metrics

In this study, we used commonly applied performance metrics: accuracy, sensitivity, specificity, precision, f-score, and the Matthews correlation coefficient:
$$Accuracy = \frac{tp + tn}{tp + tn + fp + fn}.$$
The accuracy value measures the appropriate classifications over the total elements evaluated.
$$Sensibility = \frac{tp}{tp + fn}.$$
The sensibility value, also known as recall, measures the number of positive elements that are correctly classified.
$$Specificity = \frac{tn}{tn + fp}.$$
The specificity value measures the number of negative elements that are correctly classified.
$$Precision = \frac{tp}{tp + fp}.$$
The precision value measures the proportion of correctly classified positive elements among all elements predicted as positive.
$$FScore = \frac{2\,tp}{2\,tp + fp + fn}.$$
The F-score value measures the harmonic mean between precision and recall.
These criteria are described in terms of tp, tn, fp, and fn, which denote true positive, true negative, false positive, and false negative, respectively. Additionally, to characterize the classifier performance, we used the Matthews correlation coefficient [60]:
$$MCC = \frac{(tp \times tn) - (fp \times fn)}{\sqrt{(tp + fp)(tp + fn)(tn + fp)(tn + fn)}},$$
where the MCC value measures the performance of the classification model as a coefficient between the predicted and the observed elements of the binary classification. It returns a value between [ 1 , 1 ] , where a value of 1 represents a perfect prediction, a value of 0 is no better than a random prediction, and 1 indicates total disagreement between prediction and observation.
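For reference, the metrics above can be computed with scikit-learn as sketched below, assuming melanoma is encoded as the positive class (label 1).

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             matthews_corrcoef, precision_score, recall_score)

def evaluation_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, precision, F-score, and MCC."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": recall_score(y_true, y_pred),   # tp / (tp + fn)
        "specificity": tn / (tn + fp),
        "precision": precision_score(y_true, y_pred),
        "f_score": f1_score(y_true, y_pred),
        "mcc": matthews_corrcoef(y_true, y_pred),
    }
```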

4.1.3. Dataset

This study uses the public ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection grand challenge dataset [21], task 3, also known as HAM10000 [61], which contains 10,015 separate images, as shown in Figure 6, where AKIEC corresponds to Actinic Keratosis, BCC is Basal Cell Carcinoma, DF is Dermatofibroma, MEL is Melanoma, NV is Nevus, BKL is Pigmented Benign Keratosis, and VASC is Vascular. This distribution was obtained from the ground truth file, and each image is in RGB space with a size of 450 × 600 pixels.
As one can see in Figure 6, the ISIC dataset contains images that belong to different types of skin lesions that do not correspond to the melanoma-type lesion. Therefore, we decided to modify the dataset to develop a binary class classification by excluding all the classes, except for Melanoma and Nevus.
We split the ISIC dataset: 75 % for the training set and 25 % for test set. The features extracted were processed by Z-score normalization:
$$Z = \frac{x - \mu}{\sigma}.$$
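A sketch of the split and normalization step is shown below; the stratified split and fitting the scaler on the training set only are assumptions of this illustration.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def split_and_normalise(F, y, seed=0):
    """75/25 split followed by Z-score normalisation, Z = (x - mu) / sigma."""
    X_tr, X_te, y_tr, y_te = train_test_split(F, y, test_size=0.25,
                                              stratify=y, random_state=seed)
    scaler = StandardScaler().fit(X_tr)
    return scaler.transform(X_tr), scaler.transform(X_te), y_tr, y_te
```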

4.1.4. Balance Data

The adjustment of the ISIC dataset mentioned above means that the data are unbalanced: one class contains more data than the other. When most machine learning techniques employ unbalanced data, the result can be lower performance on the minority class, which can cause misclassification of the data.
SMOTE [62] is a data augmentation method that oversamples the minority class to compensate with respect to the majority class. This method is based on K-NN clustering and the Euclidean distance, selecting two points of the minority class and computing a new one between them. The method iterates until the minority class reaches an amount of information equivalent to that of the majority class.
The SMOTE technique has been employed in several studies [63,64], where the extracted features of an unbalanced dataset are oversampled to compensate for the number of instances between classes. In this work, we apply this method to the features selected by the MI criterion to compensate the data of the melanoma class against the nevus class; as a result, a balanced dataset with the fused features is obtained.
The study [65] introduces new metrics to overcome this problem. The geometric mean attempts to maximize the accuracy of each of the two classes simultaneously, providing a performance metric that correlates both objectives:
$$Gmean = \sqrt{Sensitivity \cdot Specificity}.$$
Dominance is aimed at quantifying the prevalence relation between the majority and minority classes and is used to analyze the behavior of a binary classifier:
$$Dominance = Sensitivity - Specificity.$$
The Index of Balanced Accuracy (IBA) is a performance metric in classification that aims to make it more sensitive for imbalanced domains. This metric is defined as follows:
$$IBA = (1 + Dominance) \cdot Gmean^2.$$
The objective of this procedure is to moderately favor the classification models with a higher prediction rate of the minority class, without underestimating the relevance of the majority class.
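The sketch below shows how SMOTE and the imbalance-aware metrics can be obtained with the Imblearn library listed in Section 4.1.1; applying SMOTE to the training split only is an assumption here, and note that imblearn's IBA uses an alpha-weighted dominance (default alpha = 0.1) rather than the unweighted form written above.

```python
from imblearn.over_sampling import SMOTE
from imblearn.metrics import geometric_mean_score, make_index_balanced_accuracy

def balance_training_set(X_train, y_train, seed=0):
    """Oversample the minority (melanoma) class with SMOTE."""
    return SMOTE(random_state=seed).fit_resample(X_train, y_train)

def imbalance_metrics(y_true, y_pred):
    """Geometric mean and Index of Balanced Accuracy of a binary prediction."""
    iba_scorer = make_index_balanced_accuracy(alpha=0.1, squared=True)(
        geometric_mean_score)
    return {"g_mean": geometric_mean_score(y_true, y_pred),
            "iba": iba_scorer(y_true, y_pred)}
```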

4.1.5. Experimental Results

In the following tables, the experimental results of binary classification for balanced data are presented.
The experimental results in Table 4 show that the designed system appears to demonstrate sufficiently good performance when different CNN architectures are fused with handcraft features in accordance with the MI metric, which seeks relevant information among the features, in contrast to concatenation or discriminative analysis.
Table 4 provides the experimental results obtained using the selected CNN architectures, where MobileNet v2 appears to demonstrate the best performance in comparison with the aforementioned architectures. The proposed method shows notable values for evaluation metrics such as accuracy, Area Under Curve (AUC), and IBA. The selected features contain the fused features that, according to the MI metric, are the most relevant for the classification of a lesion.
The experimental results for different criteria presented in Table 5 show that the designed system outperforms several state-of-the-art methods. The experimentally justified performance, in our view, is due to the fusion technique employed, where the mutual information metric seeks relevant information among the features rather than relying on concatenation or discriminative analysis. Moreover, the IBA metric is employed, achieving a value of 0.80, which confirms the stability and robustness of the system when balanced data are used.
The proposed method achieves an accuracy of 92.40 %, a sensitivity of 86.41 %, an AUC of 89.64 %, and an IBA of 0.80. In [24], the authors proposed the usage of the complete PSL image, which contains healthy skin and artefacts, whereby there is a probability of misclassification from recognizing patterns that belong to these objects. By contrast, the proposed system extracts the features only from the region of interest of an image for the entire classification process. This guarantees that the feature extraction is performed exactly on the lesion.
Moreover, our novel CAD employed the ISIC 2018 database [21], which contains more than 10,000 dermoscopy images that are authenticated by experts. Additionally, because of the unbalanced data present in this database, we applied the data augmentation procedure given in Section 4.1.4. This guarantees the robustness of the obtained classification results. In contrast, the CAD DermoDeep system [24] obtained its experimental results using a synthetic database constructed from four different databases (private and public). In this case, an equal number of melanoma and benign skin lesions were subjectively selected from each of the four databases. That system showed slightly better performance results than those reported in this study. In our opinion, such an approach does not guarantee that the same high performance can be repeated on data that have not been previously preselected.
The proposed system used a data augmentation technique and presented the performance analysis for all images contained in the database and not only those that have been preselected according to a subjective criterion that has no statistical justification.
Finally, our proposed system was developed with medical-based and deep learning features, whereby the system employed data from both sets of features and merged them, applying the MI criterion. As a result, the system enhances the recognition of melanoma and nevus lesions compared to the use of a fully deep learning approach that is extremely computationally expensive to train, requires substantial amounts of labeled data and does not recognize dermoscopic features established in the ABCD algorithm.

5. Conclusions and Future Work

In this study, a novel competitive CAD system was designed to differentiate melanoma from nevus lesions. Different from commonly proposed CAD systems, the novel method employs handcraft features based on the medical algorithm ABCD rule and deep learning features and applies a transfer learning method as a feature extractor. Additionally, in the proposed system, the set features are fused using an MI metric that, in contrast with state-of-the-art systems, can select the most significant features in accordance with their influence on binary classification decisions.
The performance of the proposed system has been evaluated; the system achieved an accuracy of 92.4 % , an IBA of 0.80 , and an MCC of 0.7953 using a balanced dataset. The system is competitive against the performance of other, state-of-the-art systems.
The proposed CAD system can help inexperienced physicians to visually distinguish the medical features to be applied. Furthermore, it could be used to provide a second opinion to a dermatologist. Our future work will consist of designing a method for multiclass classification using both sets of features, thus permitting the diagnosis of several diseases found in the ISIC challenge dataset.

Author Contributions

Methodology: J.-A.A.-D. and V.P.; software: J.-A.A.-D. and H.C.-F.; formal analysis: J.-A.A.-D., V.P. and S.S.; investigation: J.-A.A.-D. and V.P.; resources: J.-A.A.-D. and V.P.; data curation: J.-A.A.-D. and V.P.; writing—original draft preparation: J.-A.A.-D. and V.P.; writing—review and editing: J.-A.A.-D., V.P., S.S. and H.C.-F. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

The authors would like to thank the Instituto Politecnico Nacional (Mexico) and the Consejo Nacional de Ciencia y Tecnologia (Mexico) for their support in this work.

Conflicts of Interest

The authors declare that there is no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ELM     Epiluminescence Microscopy
MD      Medical Doctor
PSL     Pigmented Skin Lesion
ROI     Region of Interest
GLCM    Gray Level Co-occurrence Matrix
DL      Deep Learning
CAD     Computer-Aided Detection
SVM     Support Vector Machine
RVM     Relevant Vector Machine
LR      Logistic Regression
CNN     Convolutional Neural Network
TF      Transfer Learning
MI      Mutual Information
K-NN    K-Nearest Neighborhood
MCC     Matthews Correlation Coefficient

References

  1. Skin Cancers. Available online: http://www.who.int/uv/faq/skincancer/en/index1.html (accessed on 15 January 2020).
  2. Skin Cancer. Available online: https://www.wcrf.org/dietandcancer/skin-cancer (accessed on 15 January 2020).
  3. Baldi, A.; Quartulli, M.; Murace, R.; Dragonetti, E.; Manganaro, M.; Guerra, O.; Bizzi, S. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions. Cancers 2010, 2, 262–273. [Google Scholar] [CrossRef]
  4. Almaraz-Damian, J.A.; Ponomaryov, V.; Rendon-Gonzalez, E. Melanoma CADe based on ABCD Rule and Haralick Texture Features. In Proceedings of the 2016 9th International Kharkiv Symposium on Physics and Engineering of Microwaves, Millimeter and Submillimeter Waves (MSMW), Kharkiv, Ukraine, 20–24 June 2016; pp. 1–4. [Google Scholar]
  5. Li, Y.; Shen, L. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network. Sensors 2018, 18, 556. [Google Scholar] [CrossRef] [Green Version]
  6. Lopez, A.R.; Giro-i-Nieto, X.; Burdick, J.; Marques, O. Skin lesion classification from dermoscopic images using deep learning techniques. In Proceedings of the 2017 13th IASTED International Conference on Biomedical Engineering (BioMed), Innsbruck, Austria, 20–21 February 2017; pp. 49–54. [Google Scholar]
  7. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE 2019, 14, e0217293. [Google Scholar] [CrossRef] [Green Version]
  8. Castillejos, H.; Ponomaryov, V.; Nino-De-Rivera, L.; Golikov, V. Wavelet Transform Fuzzy Algorithms for Dermoscopic Image Segmentation. Comput. Math. Method. Med. 2012, 2012, 578721. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Nachbar, F.; Stolz, W.; Merkle, T.; Cognetta, A.B.; Vogt, T.; Landthaler, M.; Bilek, P.; Braun-Falco, O.; Plewig, G. The ABCD Rule of Dermatoscopy. J. Am. Acad. Dermatol. 1994, 30, 551–559. [Google Scholar] [CrossRef] [Green Version]
  10. Zalaudek, I.; Argenziano, G.; Soyer, H.P.; Corona, R.; Sera, F.; Blum, A.; Braun, R.P.; Cabo, H.; Ferrara, G.; Kopf, A.W.; et al. Three-point checklist of dermoscopy: An open internet study. Br. J. Dermatol. 2005, 154, 431–437. [Google Scholar] [CrossRef] [PubMed]
  11. Henning, J.S.; Dusza, S.W.; Wang, S.Q.; Marghoob, A.A.; Rabinovitz, H.S.; Polsky, D.; Kopf, A.W. The CASH (color, archi-tecture, symmetry, and homogeneity) algorithm for dermoscopy. J. Am. Acad. Dermatol. 2007, 56, 45–52. [Google Scholar] [CrossRef] [PubMed]
  12. Stolz, W.; Riemann, A.; Cognetta, A.B.; Pillet, L.; Abmayr, W.; Hölzel, D.; Bilek, P.; Nachbar, F.; Landthaler, M.; Braun-Falco, O. ABCD rule of dermatoscopy: A new practical method for early recognition of malignant melanoma. Eur. J. Dermatol. 1994, 4, 521–527. [Google Scholar]
  13. Argenziano, G.; Fabbrocini, G.; Carli, P.; De Giorgi, V.; Sammarco, E.; Delfino, M. Epiluminescence Microscopy for the Diagnosis of Doubtful Melanocytic Skin Lesions: Comparison of the ABCD Rule of Dermatoscopy and a New 7-Point Checklist Based on Pattern Analysis. Arch. Dermatol. 1998, 134, 1563–1570. [Google Scholar] [CrossRef] [Green Version]
  14. Melanoma Education Foundation. Finding Melanoma Early: Warning Signs & Photos. Available online: https://www.skincheck.org/Page4.php (accessed on 2 February 2020).
  15. MoleMap NZ Official Site. The EFG of Nodular Melanomas. Available online: https://www.molemap.co.nz/knowledge-centre/efg-nodular-melanomas (accessed on 2 February 2020).
  16. Jensen, J.D.; Elewski, B.E. The ABCDEF Rule: Combining the “ABCDE Rule” and the “Ugly Duckling Sign” in an Effort to Improve Patient Self-Screening Examinations. J. Clin. Aesthet. Dermatol. 2015, 8, 15. [Google Scholar]
17. Kalkhoran, S.; Milne, O.; Zalaudek, I.; Puig, S.; Malvehy, J.; Kelly, J.W.; Marghoob, A.A. Historical, Clinical, and Dermoscopic Characteristics of Thin Nodular Melanoma. Arch. Dermatol. 2010, 146, 311–318.
18. Adjed, F.; Gardezi, S.J.S.; Ababsa, F.; Faye, I.; Dass, S.C. Fusion of structural and textural features for melanoma recognition. IET Comput. Vis. 2018, 12, 185–195.
19. Mendonça, T.; Ferreira, P.M.; Marques, J.; Marcal, A.R.S.; Rozeira, J. PH2—A dermoscopic image database for research and benchmarking. In Proceedings of the 35th International Conference of the IEEE Engineering in Medicine and Biology Society, Osaka, Japan, 3–7 July 2013.
20. Hagerty, J.R.; Stanley, R.J.; Almubarak, H.A.; Lama, N.; Kasmi, R.; Guo, P.; Stoecker, W.V. Deep Learning and Handcrafted Method Fusion: Higher Diagnostic Accuracy for Melanoma Dermoscopy Images. IEEE J. Biomed. Health Inform. 2019, 23, 1385–1391.
21. Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2018, arXiv:1902.03368.
22. Li, X.; Wu, J.; Jiang, H.; Chen, E.Z.; Dong, X.; Rong, R. Skin Lesion Classification Via Combining Deep Learning Features and Clinical Criteria Representations. bioRxiv 2018, bioRxiv:382010.
23. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Y.; Liu, T.-Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; pp. 3149–3157.
24. Abbas, Q.; Celebi, M.E. DermoDeep—A classification of melanoma-nevus skin lesions using multi-feature fusion of visual features and deep neural network. Multimed. Tools Appl. 2019, 78, 23559–23580.
25. Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images; Technical Report TR-2009; University of Toronto: Toronto, ON, Canada, 2009.
26. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
27. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
29. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
30. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807.
31. Huang, G.; Liu, Z.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
32. Orea-Flores, I.Y.; Gallegos-Funes, F.J.; Arellano-Reynoso, A. Local Complexity Estimation Based Filtering Method in Wavelet Domain for Magnetic Resonance Imaging Denoising. Entropy 2019, 21, 401.
33. Goritskiy, Y.; Kazakov, V.; Shevchenko, O.; Mendoza, F. Model of Random Field with Piece-Constant Values and Sampling-Restoration Algorithm of Its Realizations. Entropy 2019, 21, 792.
34. Yang, M.; Kpalma, K.; Joseph, R. A Survey of Shape Feature Extraction Techniques. In Pattern Recognition Techniques, Technology and Applications; Yin, P.-Y., Ed.; InTech: London, UK, 2008. Available online: http://www.intechopen.com/books/pattern_recognition_techniques_technology_and_applications/a_survey_of_shape_feature_extraction_techniques (accessed on 10 January 2020).
35. Bissoto, A.; Fornaciali, M.; Valle, E.; Avila, S. (De)Constructing Bias on Skin Lesion Datasets. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–20 June 2019; pp. 2766–2774.
36. Sirakov, N.M.; Mete, M.; Chakrader, N.S. Automatic boundary detection and symmetry calculation in dermoscopy images of skin lesions. In Proceedings of the 2011 IEEE International Conference on Image Processing (ICIP), 2011; pp. 1637–1640.
37. Haralick, R.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621.
38. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
39. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci. 2018, 2018, 7068349.
40. Chollet, F. Deep Learning with Python, 1st ed.; Manning Publications Co.: Greenwich, CT, USA, 2017.
41. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA, USA; pp. 3320–3328.
42. Rawat, W.; Wang, Z. Deep convolutional neural networks for image classification: A comprehensive review. Neural Comput. 2017, 29, 2352–2449.
43. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
44. Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A survey of transfer learning. J. Big Data 2016, 3.
45. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015.
46. Bommert, A.; Sun, X.; Bischl, B.; Rahnenführer, J.; Lang, M. Benchmark for Filter Methods for Feature Selection in High-Dimensional Classification Data. Comput. Stat. Data Anal. 2020, 143, 106839.
47. Karczmarek, P.; Pedrycz, W.; Kiersztyn, A.; Rutka, P. A study in facial features saliency in face recognition: An analytic hierarchy process approach. Soft Comput. 2017, 21, 7503–7517.
48. Kozachenko, L.F.; Leonenko, N.N. Sample Estimate of the Entropy of a Random Vector. Probl. Peredachi Inf. 1987, 23, 9–16.
49. Kraskov, A.; Stögbauer, H.; Grassberger, P. Estimating mutual information. Phys. Rev. E 2004, 69, 066138.
50. Houghton, C. Calculating mutual information for spike trains and other data with distances but no coordinates. R. Soc. Open Sci. 2015, 2.
51. Ross, B.C. Mutual information between discrete and continuous data sets. PLoS ONE 2014, 9.
52. Vapnik, V. Statistical Learning Theory; John Wiley: New York, NY, USA, 1998.
53. Tipping, M.E. Sparse bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1, 211–244.
54. Bishop, C. Probabilistic graphical models and their role in machine learning. In Proceedings of the NATO ASI–LTP 2002 Tutorial, Leuven, Belgium, 8–19 July 2002.
55. Chollet, F. Keras. 2015. Available online: https://keras.io (accessed on 15 January 2020).
56. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
57. Coelho, L.P. Mahotas: Open source software for scriptable computer vision. J. Open Res. Softw. 2013, 1, e3.
58. Wen, Z.; Shi, J.; Li, Q.; He, B.; Chen, J. ThunderSVM: A fast SVM library on GPUs and CPUs. J. Mach. Learn. Res. 2018, 19, 797–801.
59. Lemaître, G.; Nogueira, F.; Aridas, C.K. Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning. J. Mach. Learn. Res. 2017, 18, 559–563.
60. Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. J. Mach. Learn. Technol. 2011.
61. Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161.
62. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357.
63. Celebi, M.E.; Kingravi, H.A.; Uddin, B.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imag. Graph. 2007, 31, 362–373.
64. Capdehourat, G.; Corez, A.; Bazzano, A.; Alonso, R.; Musé, P. Toward a combined tool to assist dermatologists in melanoma detection from dermoscopic images of pigmented skin lesions. Pattern Recognit. Lett. 2011, 32, 2187–2196.
65. García, V.; Mollineda, R.A.; Sánchez, J.S. Index of Balanced Accuracy: A Performance Measure for Skewed Class Distributions; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2009; Volume 5524, pp. 441–448.
Figure 1. Block diagram of the novel Computer-Aided Detection (CAD) system.
Figure 2. (a) Original image I(x, y); (b) image (a) processed with a Gaussian filter.
Figure 3. (a) Original image I(x, y), (b) I(x, y) on channel L, (c) I(x, y) on channel a*, and (d) I(x, y) on channel b*.
Figure 4. Results of the thresholding stage: (a) Original image I(x, y), (b) binary image I_Th_L obtained from the threshold of the L channel, (c) binary image I_Th_a obtained from the threshold of the a* channel, (d) binary image I_Th_b obtained from the threshold of the b* channel.
Figure 5. Results of the preprocessing stage: Original image I(x, y), (a) Region of Interest (ROI) obtained, (b) segmented image S_I_bin, (c) obtained asymmetry at 0°, and (d) obtained asymmetry at 90°.
Figure 6. Distribution of classes on the ISIC2018/HAM10000 dataset.
Table 1. The number of features extracted by architecture.

CNN Architecture    # Features
VGG19               4096
VGG16               4096
ResNET-50           2048
Inception v3        2048
Mobilenet v1        1024
Mobilenet v2        1280
DenseNET-201        1920
Xception            2048
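The feature counts in Table 1 are the lengths of the bottleneck descriptors obtained when each pretrained network is used without its classification head. The code below is a minimal sketch (not the exact extraction pipeline of this work) using Keras [55] and MobileNetV2, whose global-average-pooled output yields the 1280 features listed above; the input array and helper name are illustrative assumptions.

import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

# ImageNet weights, no dense head; global average pooling -> 1280-D descriptor
extractor = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")

def deep_features(lesion_rgb_224):
    """lesion_rgb_224: a 224x224x3 uint8 dermoscopic image (already resized)."""
    x = preprocess_input(lesion_rgb_224.astype("float32"))
    x = np.expand_dims(x, axis=0)      # add batch dimension
    return extractor.predict(x)[0]     # shape (1280,), one row of the deep-feature matrix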
Table 2. Example of features that expose the lowest Mutual Information (MI) values.

Inception v3 + Handcraft Features
Feature    Mutual Info. value
1448       3.813956037657107 × 10^−7
648        1.4314941993776031 × 10^−5
1020       3.4236070477255964 × 10^−5
804        3.515106075036023 × 10^−5
333        4.213506368255793 × 10^−5
562        4.4971751217204314 × 10^−5
91         5.133156269931938 × 10^−5
852        6.514623080855486 × 10^−5
1689       7.53828133426282 × 10^−5
1788       7.605629690621285 × 10^−5
Table 3. Example of features that expose the highest MI values.

Inception v3 + Handcraft Features
Feature    Mutual Info. value
Mean_b     0.09829037284938669
Min_b      0.09536305749234275
Max_b      0.06834317593527395
578        0.06131147578510121
Min_G      0.05817685924318594
116        0.055293703553799256
389        0.05446628875169646
464        0.05424503079140042
Var_L      0.05420063226575533
288        0.053949117014718606
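Tables 2 and 3 rank individual fused features (deep-learning activations, indexed by number, and handcrafted descriptors such as Mean_b or Var_L) by their estimated mutual information with the melanoma/nevus label. The sketch below, assuming a fused feature matrix X and a label vector y (placeholder names, not the variables of this work), shows how such a ranking can be computed with scikit-learn [56], whose estimator follows the nearest-neighbor MI approach of [48,49,51].

import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_by_mutual_information(X, y, n_neighbors=3):
    """Return feature indices and MI scores sorted from highest to lowest."""
    mi = mutual_info_classif(X, y, n_neighbors=n_neighbors, random_state=0)
    order = np.argsort(mi)[::-1]          # descending MI, as in Table 3
    return order, mi[order]

# Example selection rule (hypothetical threshold): discard near-zero-MI features
# order, scores = rank_by_mutual_information(X, y)
# selected = order[scores > 1e-4]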
Table 4. Performance results of the proposed method using selected deep learning architectures fused with handcraft features.

CNN Architecture  Acc. Train  Acc. Test  Sensibility  Specificity  Precision  F-Score  AUC    G-Mean  IBA   MCC
VGG16             88.60       84.90      79.23        0.85         88.74      83.71    84.79  0.85    0.72  0.7012
VGG19             90.23       87.14      82.46        0.87         90.44      86.26    87.05  0.87    0.76  0.7451
Mobilenet v1      91.48       89.32      84.04        0.89         93.49      88.51    89.21  0.89    0.79  0.7898
Mobilenet v2      92.40       89.71      86.41        0.90         92.08      89.16    89.64  0.90    0.80  0.7953
ResNET-50         90.67       87.86      81.24        0.88         93.09      86.76    87.72  0.87    0.77  0.7624
DenseNET-201      91.10       88.54      83.25        0.88         92.61      87.68    88.44  0.88    0.78  0.5985
Inception V3      91.33       88.10      84.87        0.88         90.59      87.42    88.02  0.88    0.77  0.7632
Xception          90.47       87.53      83.19        0.87         90.58      86.73    87.44  0.87    0.76  0.7525
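The sketch below, assuming ground-truth labels y_true, hard predictions y_pred, and melanoma-probability scores y_score from a held-out test split (all placeholder names), shows how the metrics reported in Table 4 can be computed with scikit-learn [56] and imbalanced-learn [59].

from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             f1_score, roc_auc_score, matthews_corrcoef)
from imblearn.metrics import geometric_mean_score, make_index_balanced_accuracy

def report(y_true, y_pred, y_score):
    # IBA of the geometric mean, built with imbalanced-learn's decorator factory
    iba_gmean = make_index_balanced_accuracy(alpha=0.1, squared=True)(geometric_mean_score)
    return {
        "Accuracy":    accuracy_score(y_true, y_pred),
        "Sensibility": recall_score(y_true, y_pred),              # recall of the melanoma class
        "Specificity": recall_score(y_true, y_pred, pos_label=0),
        "Precision":   precision_score(y_true, y_pred),
        "F-Score":     f1_score(y_true, y_pred),
        "AUC":         roc_auc_score(y_true, y_score),            # needs scores, not hard labels
        "G-Mean":      geometric_mean_score(y_true, y_pred),
        "IBA":         iba_gmean(y_true, y_pred),
        "MCC":         matthews_corrcoef(y_true, y_pred),
    }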
Table 5. Comparison between our novel CAD and state-of-the-art CADs.

Metric                   [18]    [24]    [22]        Proposed Method Using Mobilenet v2 Architecture
Accuracy                 86.07   95      85.55       92.40
Sensibility              78.93   93      86          86.41
Specificity              93.25   -       -           90
Precision                -       93      85          92.08
F-Score                  -       -       86          89.16
G-Mean                   -       -       -           0.90
IBA                      -       -       -           0.80
MCC                      -       -       -           0.7953
Imbalance Data           Yes     No      No          No
Fused Data               Yes     Yes     Yes         Yes
Type of Classification   Binary  Binary  Multiclass  Binary
