Abstract
Sickle cell disease (SCD), a blood disorder that transforms red blood cells into a distinctive sickle shape, is a major concern as it not only compromises the blood’s oxygen-carrying capacity but also poses significant health risks, ranging from weakness to paralysis and, in severe cases, even fatality. This condition not only underscores the pressing need for innovative solutions but also encapsulates the broader challenges faced by medical professionals, including delayed treatment, protracted processes, and the potential for subjective errors in diagnosis and classification. Consequently, the application of artificial intelligence (AI) in healthcare has emerged as a transformative force, inspiring multidisciplinary efforts to overcome the complexities associated with SCD and enhance diagnostic accuracy and treatment outcomes. Transfer learning is used to extract features from the input dataset and produce accurate predictions. We analyse and compare the performance parameters of three distinct models for this purpose: GoogLeNet, ResNet18, and ResNet50. The best results were obtained with the ResNet50 model, with an accuracy of 94.90%. Explainable AI provides transparency and confirmation of the predictions made by the classifiers; this research utilizes Grad-CAM to interpret the models and make them more reliable. This approach therefore benefits pathologists through the speed, precision, and accuracy of its classification of sickle cells.
1 Introduction
Sickle cell disease (SCD) is an inherited blood disorder in which an abnormality in haemoglobin causes the red blood cells to become rigid and “sickle-shaped” rather than their typical round, bi-concave shape [1,2,3]. A genetic mutation in the HBB gene on chromosome 11, which encodes the beta-globin subunit of haemoglobin, the pivotal protein tasked with transporting oxygen within red blood cells, produces a single amino acid substitution: the amino acid valine replaces glutamic acid, forming haemoglobin S (HbS), the abnormal haemoglobin associated with SCD. The mutated HbS polymerizes into bundles, distorting the red blood cells and causing erythrocyte sickling [4]. These sickle-shaped cells can cause blockages in blood vessels, leading to several health problems, including long-term pain, anaemia, swelling of the extremities, increased susceptibility to bacterial infections, and stroke [5,6,7,8]. Vaso-occlusion is a condition in which blood vessels are clogged by sickle cells, further restricting blood flow. Figure 1(a) shows the appearance of sickle cells along with normal RBCs in blood, while Figure 1(b) shows vaso-occlusion occurring during SCD.
SCD is a genetic disorder that is inherited in an autosomal recessive manner. This means that for a child to be affected by the condition, both parents must carry and pass on the mutant haemoglobin gene. Figure 2 shows the inheritance pattern of SCD.
Nearly two-thirds of infants with HbSS (haemoglobin S homozygous), which indicates inheriting two copies of the sickle haemoglobin gene (HbS), one from each parent, are born in Nigeria, the Republic of Congo, or India [9]. The childhood mortality rate is therefore high in these areas. Among African Americans, the disease affects about 1 in 365 people, and in the United States, 90,000–100,000 people have SCD [10]. In 2010, an estimated 305,800 newborns worldwide were diagnosed with homozygous SCD, and this number is expected to increase to approximately 400,000 by 2050 [9,10,11,12]. Current treatments include medications to alleviate symptoms and blood transfusions. Stem cell transplantation is offered as an option to young children and adolescents, while gene therapies and gene-editing technologies are being researched [4,11–15].
1.1 Problems in identifying sickle cells and artificial intelligence (AI) as an aid
Being a complex disorder, SCD is not easy to diagnose, for several reasons [16,17]. Some of these are:
Shape and size variability: Sickle cells can take on various shapes and sizes, making them difficult to differentiate from other red blood cells. The orientations also need to be considered while analysing.
Low concentration: Sickle cells are found in low numbers in the blood, making detection challenging.
Morphological variations: The form and appearance of sickle cells can alter over time, making identification complicated.
Human error: Because sickle cell identification is frequently done manually by experienced professionals, it is vulnerable to human error and unpredictability.
Lack of automation: Current methods for identifying sickle cells are time-consuming and labour-intensive, and automated solutions are needed to improve efficiency and accuracy.
In the field of healthcare, the use of AI has significantly enhanced the pace of progress. It has helped doctors and pathologists as an aid to diagnosis and to decide treatment plans based on patient data. It has also covered domains such as health management systems [18], telemetry and telemedicine [19,20], drug delivery data analysis, and precision medicine [21,22]. AI plays a major role where human intervention alone becomes difficult.
When images play a major role in diagnosis, deep learning is commonly used. A significant benefit of deep learning models is their capacity to automatically extract intricate patterns and high-level representations from data, often eliminating the need for manual feature engineering. The model learns features, trains itself, and provides an output. Explainable AI techniques such as Grad-CAM and LIME can be used to explain the predictions made by deep learning models, allowing humans to understand and interpret the conclusions reached by the algorithm. This is essential in AI models as it characterizes a model’s accuracy, fairness, transparency, and outcomes. A few studies have used deep learning for SCD [23,24].
An unmet need in the diagnosis of sickle cells is their automated identification using deep learning and AI [25–28]. A study by de Haan et al. addressed automated screening of sickle cells using a smartphone-based microscope. The framework used two deep neural networks designed for specific tasks: one standardized the quality of images captured with the experimental microscope to match a laboratory-grade benchtop microscope, and the other took the output of the first network as its input and performed semantic segmentation between healthy and sickle cells within a blood smear. This method proved to be more accurate and cost-effective [29]. Using the Hough transform and morphological tools can also enhance deep learning models [30]. Alzubaidi et al. (2020) studied lightweight deep learning models for classifying red blood cells into circular, elongated, and other categories. To simplify the implementation of deep learning models and enhance their performance, transfer learning was employed, although it is important to note that transfer learning may not have a substantial impact on performance in medical image tasks when the source domain is entirely dissimilar from the target domain [26]. Following a similar approach, categorization into three datasets can be done for better results: one intended for domain transfer learning, the second to enhance robustness, and the third for testing purposes. With this approach, the accuracy obtained was 98.87% for the collected dataset [31]. Transfer learning with ResNet50 and DenseNet121 has shown noticeable results for medical data classification [32,33]; specifically, for sickle cells, an accuracy of 93.88% was reported with ResNet50 [34]. Apart from sickle cells, deep learning has been widely applied to various medical tasks, showcasing its versatility and effectiveness in the healthcare domain, such as detecting COVID-19 from chest images [35], automatic prediction and grading of retinopathy from retinal fundus images using three novel methods [36], wherein the models not only predict the presence of retinopathy but also grade its severity, assisting healthcare professionals in timely intervention and management, and detecting glaucoma from fundus images, facilitating early intervention and preventing vision loss.
Taking different studies into consideration, major research gaps were found in sickle cell classification. First, there remains a high chance that overlapping cells or the three-dimensional geometry of a cell may be overlooked, i.e. a sickle cell may be assumed to be normal when viewed from a different angle. Second, the generality of a model is restricted when it is trained on small, specific custom data, resulting in reduced accuracy when tested on new datasets. This research has been conducted to tackle these issues. Explainable artificial intelligence (XAI) is being extensively used to demonstrate the trustworthiness of models; however, very few studies have used XAI for sickle cell detection. Our main contributions are:
Utilizing transfer learning for the detection of sickle cells from microscopic images of peripheral blood smears.
Presenting comparative results and performance of three different models, GoogLeNet, ResNet-18, and ResNet-50, for the detection of sickle cell samples.
Further, XAI is used to demystify the deep learning models; Grad-CAM is used to make the predictions transparent and understandable.
The rest of the article is as follows: Section 2 discusses the methodology. Section 3 presents the results. Various discussions are made in Section 4. Section 5 includes a brief conclusion.
2 Materials and methods
The entire workflow of this study is described in Figure 3.
2.1 Dataset
Data have to be gathered before analysing or extracting any features. There are two ways to proceed with this task: data can be collected from medical facilities after obtaining prior ethical clearance, or public datasets can be used. Here, we use a publicly available open-source dataset from University College London. The data were gathered to assess the effectiveness of automated image analysis algorithms in identifying sickle cells in blood samples from various digital images [37]. A total of 1,985 weakly labelled images were captured at 100× magnification using a 1.4 NA objective lens, a camera capable of capturing colour images, and a motorized X-Y stage for precise sample positioning. Of the 1,985 images, 740 were labelled as containing sickle cells, 1,134 as non-sickle, and 111 had no labels. Figure 4 shows both sickle and non-sickle images from the dataset.
2.2 Dataset preparation
A dataset obtained from an open-source repository usually has to be modified. The raw dataset contained folders named by patient ID, and the download also included a text file listing the folder names with a corresponding binary value for the true and false classes. To segregate the files quickly, a Python loop was used to match the file names and move each file to the desired location. Because the dataset was weakly supervised, manual inspection and segregation were also required, which reduced the final dataset to 1,664 images and helped achieve a balance between the two classes. The size of the images also needed adjustment: they were converted from TIFF to JPEG format to lessen the system’s processing requirements, reducing the file size tremendously, from about 15 MB to 550 kB.
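The segregation and conversion scripts are not included in the article; the following is a minimal Python sketch of the steps described above, assuming a hypothetical layout in which `labels.txt` lists each patient folder with a binary class flag and Pillow is available for the TIFF-to-JPEG conversion.

```python
from pathlib import Path
from PIL import Image  # Pillow, assumed here for the TIFF-to-JPEG conversion

RAW_DIR = Path("raw_dataset")     # hypothetical: one folder per patient ID
OUT_DIR = Path("sorted_dataset")  # destination: sorted_dataset/sickle, sorted_dataset/other

# labels.txt is assumed to hold "<folder_name> <0|1>" per line, where 1 = sickle
labels = {}
with open("labels.txt") as fh:
    for line in fh:
        name, flag = line.split()
        labels[name] = "sickle" if flag == "1" else "other"

for folder, cls in labels.items():
    dst = OUT_DIR / cls
    dst.mkdir(parents=True, exist_ok=True)
    for tif in (RAW_DIR / folder).glob("*.tif*"):
        # Convert each TIFF to JPEG to shrink the file size, saving it into its class folder
        Image.open(tif).convert("RGB").save(dst / (tif.stem + ".jpg"), "JPEG", quality=90)
```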
2.3 Network architecture
GoogLeNet, ResNet-18, and ResNet-50 are all popular convolutional neural network (CNN) architectures used for various computer vision tasks such as image classification. There have been several studies that have used these models in the field of digital pathology. While they share the common goal of extracting meaningful features from images, they differ in their architectural designs, depth, and complexity [38].
GoogLeNet introduced the concept of the Inception module, which employs multiple filters of different sizes within a single layer to capture diverse information. It aims to balance depth and computational efficiency by reducing the number of parameters. GoogLeNet’s efficiency and ability to handle variations in object sizes within images make it a valuable choice for tasks like medical image classification. The network contains nine Inception modules stacked on top of each other, with occasional max-pooling layers for downsampling. This makes a total of 22 layers, which include convolutional layers, max-pooling layers, and fully connected layers. The final layers consist of global average pooling, a fully connected layer, and the softmax output layer. Figure 5 shows the GoogLeNet architecture.
Residual network (ResNet) is based on the concept of residual learning, which introduces skip connections or shortcuts that allow information to flow directly across layers [39]. ResNet-18 is chosen for its relatively shallow architecture (18 layers); it strikes a balance between model complexity and computational efficiency, making it suitable for tasks where a lighter model is preferred, such as when dealing with limited computational resources or smaller datasets. The network consists of four stages of residual blocks, each containing multiple convolutional layers with shortcut connections. ResNet-18 has 18 layers, including convolutional layers, batch normalization layers, max-pooling layers, and fully connected layers. The initial convolutional layer has a large filter size (7 × 7) with stride 2 for downsampling, followed by max-pooling. The final layers include global average pooling, a fully connected layer, and the softmax output layer. Figure 6 shows the ResNet18 architecture.
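As an illustration of the skip-connection idea behind ResNet, a simplified basic residual block could be written as follows (a PyTorch sketch for illustration only; the study itself used MATLAB's pre-built ResNet models):

```python
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Simplified ResNet-18-style block: two 3x3 convolutions plus an identity shortcut."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                       # shortcut path: the input flows through unchanged
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity               # residual addition lets gradients flow directly
        return self.relu(out)
```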
ResNet-50 is a deeper version of ResNet-18, with 50 layers in total. It follows the same residual learning principles as ResNet-18 but uses deeper residual blocks with more layers. This increased depth allows the model to capture more intricate features and patterns in the data, potentially improving its performance on more complex sickle cell classification tasks or datasets with greater variability. The network consists of five stages, each containing multiple residual blocks. The initial convolutional layer uses a 7 × 7 filter with stride 2 for downsampling, followed by max-pooling. The final layers are similar to ResNet-18, including global average pooling, a fully connected layer, and the softmax output layer. Figure 7 shows the ResNet50 architecture [40].
GoogLeNet introduced the idea of Inception modules to capture diverse features, while ResNet-18 and ResNet-50 introduced skip connections to enable deeper networks. ResNet-18 has 18 layers and four stages of residual blocks, while ResNet-50 has 50 layers and five stages of residual blocks; ResNet-50 is deeper and more complex than both GoogLeNet and ResNet-18. The rationale for choosing ResNet18, ResNet50, and GoogLeNet involves their architectural characteristics, transfer learning capabilities, community adoption and benchmark performance, interpretability features, availability in deep learning frameworks, and an empirical exploration approach. All three models have been pre-trained on large-scale datasets like ImageNet. ResNet architectures and GoogLeNet are widely adopted and have demonstrated strong performance in various image classification tasks; the community’s extensive use and benchmarking of these models provides confidence in their effectiveness and contributes to their interpretability. These factors collectively support an informed choice for the sickle cell classification task.
2.4 Experimentation and model training
We acquired a diverse and representative dataset of sickle cell images from an open-source repository, ensuring that it included a sufficient number of samples with various conditions, resolutions, and lighting conditions. A microscope with 100× magnification, a motorized X-Y stage, and a 1.4 NA objective lens had been used to capture the images. The dataset was split in a 70:20:10 ratio for training, validation, and testing, respectively. This study used transfer learning to exploit the knowledge obtained from a pre-trained CNN to classify sickle cells. Transfer learning is a powerful technique widely used in deep learning: it leverages knowledge gained from a model pre-trained on a large dataset, typically for a different but related task, to enhance performance on the task at hand. MATLAB’s deep learning and machine learning toolboxes provided the computational framework for executing the transfer learning process; this comprehensive library facilitates seamless integration of deep learning methodologies, making it suitable for implementing complex neural network architectures and training procedures. The networks discussed in Section 2.3 are used here.

These pre-trained networks were trained on large datasets such as ImageNet. When the sickle cell dataset was introduced as input, the weights and biases in the pre-trained network underwent a dynamic update process according to the specific features present in the sickle cell dataset. Essentially, the pre-trained network served as a feature extractor or knowledge source, enabling the model to adapt to the characteristics of the custom sickle cell dataset. To fine-tune the model for the task at hand, the early layers of the pre-trained model, which had learned generic features applicable to a wide range of images, were frozen; freezing these layers prevents them from being updated during training. The later learnable layers, including the final convolutional and fully connected layers, were replaced with task-specific layers designed to capture features relevant to sickle cell classification. During training, the weights of these learnable layers were iteratively updated to better align with the characteristics of the sickle cell dataset, allowing the model to learn task-specific features while still benefiting from the general knowledge encoded in the pre-trained layers. Hyperparameters control these processes and are tuned to obtain the best result; those considered here are the maximum number of epochs, mini-batch size, initial learning rate, and optimizer.
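The study implemented this workflow in MATLAB; as a rough illustration of the same freeze-and-replace idea, a PyTorch/torchvision sketch might look as follows (the folder names and ImageFolder layout are assumptions, and ResNet50 stands in for any of the three networks):

```python
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Load a network pre-trained on ImageNet (ResNet18/GoogLeNet would be handled analogously)
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the early layers, which encode generic image features, so they are not updated
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a task-specific two-class head
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: sickle / other

# Standard ImageNet-style preprocessing applied to the blood-smear images
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical 70:20:10 split laid out as sorted_dataset/{train,val,test}/{sickle,other}
train_set = datasets.ImageFolder("sorted_dataset/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
```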
2.5 Training
The training phase is critical in deep learning, since models learn from data and adjust their parameters. The settings that govern this process are known as hyperparameters; they control the behaviour and performance of the learning algorithm and the model. Unlike the parameters of a model, which are learned through training, hyperparameters are manually set by the practitioner or determined through a search process. They define the structure and configuration of the learning algorithm and influence how the model learns and generalizes from the data [33].
The hyperparameters compared in this study are the initial learning rate, mini-batch size, maximum number of epochs, and optimizer.
The initial learning rate represents the scale of the modification applied to the model’s parameters in each iteration of the optimization process; it also signifies the extent to which the model absorbs information from the provided data. Too high a learning rate may cause the model to overshoot the minimum, while too low a learning rate can slow down convergence or cause the model to get stuck in a local minimum. The preferred value for this classification task is 0.001. The second parameter, the mini-batch size, determines the number of training samples used in each training iteration. It affects the trade-off between computational efficiency and gradient accuracy and depends on the available GPU system. We perform trials on three different batch sizes: 32, 64, and 128. Larger batch sizes can provide computational efficiency, whereas smaller batches may offer better generalization; we compare these across the different networks to see whether the batch size affects performance. The maximum number of epochs is how many times the complete dataset is passed through the model during training: too few epochs may result in underfitting, while too many may lead to overfitting. We also compare the effect of the number of epochs on performance while keeping the other parameters constant. The model was configured to train with the sgdm solver, which stands for stochastic gradient descent with momentum. This choice offers the benefit of hastened convergence, a smoother optimization path, and resilience to noise, and is particularly valuable in deep learning scenarios requiring gradient-based optimization.
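For illustration, the hyperparameter choices described above (SGD with momentum as in MATLAB's `sgdm` solver, an initial learning rate of 0.001, and one of the trialled epoch counts and batch sizes) can be expressed as a short training loop. This continues the PyTorch sketch from Section 2.4, and the momentum value of 0.9 is an assumption, since the article does not state it.

```python
import torch

# Hyperparameters compared in this study
INITIAL_LR = 0.001   # initial learning rate
MAX_EPOCHS = 30      # trialled values: 30, 50, 100
# mini-batch size (32, 64, or 128) is set on the DataLoader in the previous sketch

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(                 # SGD with momentum ~ MATLAB's 'sgdm'
    (p for p in model.parameters() if p.requires_grad),
    lr=INITIAL_LR,
    momentum=0.9,                            # assumed value, not reported in the article
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

for epoch in range(MAX_EPOCHS):
    model.train()
    for images, labels in train_loader:      # model and train_loader from the Section 2.4 sketch
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```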
The experimental setup used a GPU system with a CUDA device (NVIDIA T400, 4 GB), resulting in a notable reduction in training time.
Each hyperparameter is analysed in terms of its impact on the training process, convergence, and model performance. All three models were trained with the same hyperparameters and then compared based on their performance parameters, which are defined in Section 2.6.
2.6 Evaluation metrics
To evaluate the performance of the model for each class, we calculate the following parameters.
The ability of a model to correctly identify true positives and true negatives is termed accuracy. Precision gauges how many of the instances the model predicts as positive are actually positive, while recall evaluates the model’s competence in identifying all positive instances; an effective model should exhibit high values for both metrics. The F1-score provides a well-rounded compromise between precision and recall, signifying a commendable equilibrium between reducing false positives and false negatives [41–43].
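The numbered equations referenced later in the article are not reproduced in this text, but the metrics above correspond to the standard definitions in terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN):

```latex
\begin{aligned}
\text{Accuracy}  &= \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\text{Precision} = \frac{TP}{TP + FP}, \\[4pt]
\text{Recall (Sensitivity)} &= \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}, \qquad
\text{F1} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
\end{aligned}
```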
2.7 Grad-CAM
In traditional AI models, such as deep learning neural networks, the decision-making process can be complex and opaque, making it difficult for humans to understand how and why a particular conclusion was reached. Hence, XAI techniques have been used for transparency and interpretability. In this study, we use Grad-CAM for the visualization [44–47]. It highlights the regions of an input image that contribute most to the model’s prediction for a particular class. Grad-CAM provides interpretability by generating heatmaps that visually highlight the regions of an image that are crucial for the model’s decision. This can help clinicians and researchers understand which parts of an image contribute to the classification of sickle cells, providing insights into the model’s decision-making process. The categorization of sickle cells may encompass nuanced features that are challenging for human observers to identify readily; Grad-CAM can emphasize these nuanced features, aiding in the revelation of patterns and traits within images that signify the presence of sickle cells. The challenges faced when using XAI for sickle cells are that the deep learning models used for image classification often have complex architectures: explaining the decision-making process of intricate models can be challenging, and interpreting the significance of each parameter becomes more difficult as the complexity increases. Certain advanced machine learning models, especially deep neural networks, are often considered “black box” models, making predictions based on complex interactions that are not easily interpretable. In spite of these challenges, Grad-CAM-produced heatmaps serve as a means of communication between data scientists and healthcare professionals. The visualizations may be more easily comprehensible for clinicians compared to the direct model outputs, promoting collaboration and facilitating the seamless integration of AI models into medical workflows.
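A condensed sketch of the Grad-CAM computation described above is shown below, using forward and backward hooks on the last convolutional stage of a ResNet50 in PyTorch. The layer choice, preprocessing, and use of an ImageNet-pretrained backbone are illustrative assumptions; in this study the heatmaps were produced in MATLAB for the fine-tuned two-class model.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()          # feature maps of the last conv stage

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()    # gradients of the class score w.r.t. them

target_layer = model.layer4[-1]                     # last convolutional block of ResNet50
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image_tensor, class_idx):
    """image_tensor: preprocessed input of shape (1, 3, H, W).
    For a model with frozen layers, call image_tensor.requires_grad_(True) first."""
    scores = model(image_tensor)
    model.zero_grad()
    scores[0, class_idx].backward()                             # backprop the class score
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True) # global-average-pool the gradients
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image_tensor.shape[2:],
                        mode="bilinear", align_corners=False)   # upsample to input resolution
    return (cam / (cam.max() + 1e-8)).squeeze()                 # normalized heatmap in [0, 1]
```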
3 Results
The transfer learning approach enabled the model to harness the features acquired by the pre-trained network. These features typically excel at capturing low-level image characteristics and could be fine-tuned for the particular task of sickle cell classification. To check whether the model performs well, certain standard metrics need to be calculated. The values for epochs and mini-batch size were provided to the networks. Sensitivity is the parameter that assesses a model’s capability to correctly identify the true positives within each class; it is obtained by dividing the count of true positives by the sum of true positives and false negatives. Specificity, the algorithm’s true negative rate, indicates how well the algorithm identifies the negative classes; it is the proportion of true negatives in relation to the combined total of true negatives and false positives. Precision evaluates the correctness of the positive predictions: it quantifies the ratio of true positives to the sum of all positive predictions. Finally, accuracy is computed by dividing the count of correctly categorized predictions by the overall count of predictions. Table 1 shows the parameters obtained from the trained networks for all three architectures. It can be observed that in some cases the accuracy is high, but other metrics decrease, making that configuration unsuitable. The model where all of the values are optimal is considered the best of the lot.
Network | Epochs | Mini-batch size | Learning rate | Sensitivity | Specificity | Precision | F1-Score | Accuracy |
---|---|---|---|---|---|---|---|---|
ResNet50 | 30 | 32 | 0.001 | 0.95 | 0.9103 | 0.7308 | 0.8261 | 0.9184 |
GoogLeNet | 30 | 32 | 0.001 | 0.9375 | 0.8659 | 0.5769 | 0.7143 | 0.8776 |
ResNet18 | 30 | 32 | 0.001 | 0.5769 | 1.0000 | 1.0000 | 0.7317 | 0.8878 |
ResNet50 | 50 | 32 | 0.001 | 0.9412 | 0.8765 | 0.6154 | 0.7442 | 0.8878 |
GoogLeNet | 50 | 32 | 0.001 | 1 | 0.8471 | 0.5 | 0.6667 | 0.8673 |
ResNet18 | 50 | 32 | 0.001 | 0.5000 | 1.0000 | 1.0000 | 0.6667 | 0.8673 |
ResNet50 | 100 | 32 | 0.001 | 0.9474 | 0.8987 | 0.6923 | 0.8 | 0.9082 |
GoogLeNet | 100 | 32 | 0.001 | 0.8947 | 0.8861 | 0.6538 | 0.7556 | 0.7556 |
ResNet18 | 100 | 32 | 0.001 | 0.5000 | 1.0000 | 1.0000 | 0.6667 | 0.8673 |
ResNet50 | 30 | 64 | 0.001 | 0.9167 | 0.9459 | 0.8462 | 0.88 | 0.9388 |
GoogLeNet | 30 | 64 | 0.001 | 1 | 0.878 | 0.6154 | 0.7619 | 0.898 |
ResNet18 | 30 | 64 | 0.001 | 0.5769 | 1.0000 | 1.0000 | 0.7317 | 0.8878 |
ResNet50 | 50 | 64 | 0.001 | 0.9167 | 0.9459 | 0.8462 | 0.88 | 0.9388 |
GoogLeNet | 50 | 64 | 0.001 | 1 | 0.878 | 0.6154 | 0.7619 | 0.898 |
ResNet18 | 50 | 64 | 0.001 | 0.5769 | 1.0000 | 1.0000 | 0.7317 | 0.8878 |
ResNet50 | 100 | 64 | 0.001 | 0.9167 | 0.9459 | 0.8462 | 0.88 | 0.9388 |
GoogLeNet | 100 | 64 | 0.001 | 1 | 0.8372 | 0.4615 | 0.6316 | 0.8571 |
ResNet18 | 100 | 64 | 0.001 | 0.5769 | 1.0000 | 1.0000 | 0.7317 | 0.8878 |
ResNet50 | 30 | 128 | 0.001 | 0.92 | 0.9589 | 0.8846 | 0.902 | 0.949 |
GoogLeNet | 30 | 128 | 0.001 | 0.9286 | 0.8452 | 0.5 | 0.65 | 0.8571 |
ResNet18 | 30 | 128 | 0.001 | 0.6154 | 1.0000 | 1.0000 | 0.7619 | 0.8980 |
ResNet50 | 50 | 128 | 0.001 | 0.8846 | 0.9583 | 0.8846 | 0.8846 | 0.9388 |
GoogLeNet | 50 | 128 | 0.001 | 0.9333 | 0.8554 | 0.5385 | 0.5385 | 0.8673 |
ResNet18 | 50 | 128 | 0.001 | 0.5769 | 1.0000 | 1.0000 | 0.7317 | 0.8878 |
ResNet50 | 100 | 128 | 0.001 | 0.8846 | 0.9583 | 0.8846 | 0.8846 | 0.9388 |
GoogLeNet | 100 | 128 | 0.001 | 0.9333 | 0.8554 | 0.5385 | 0.5385 | 0.8673 |
ResNet18 | 100 | 128 | 0.001 | 0.6154 | 1.0000 | 1.0000 | 0.7619 | 0.8980 |
To find the optimum model of the three, we compared the highest accuracies obtained from the values in Table 1. Each graph in Figure 8 plots accuracy against model, with the maximum number of epochs and the mini-batch size held constant. Keeping these variables constant gives a clear view, based on the accuracies, of which model performs best. With all the permutations and combinations, a total of nine graphs were obtained.
In all nine graphs, ResNet50 gives the best accuracy. Based on this analysis, ResNet50 was selected for further testing.
This decided the upper-level selection, i.e. which type of model should be considered. Based on these results, we then select the best trial, indicating the optimum number of epochs and batch size.
Within the ResNet50 set, nine trials were performed. The best of these was obtained with 30 epochs and a batch size of 128. We examined the training progress plots of this network to get a glimpse of its performance. Training and validation plots are significant tools in the evaluation and analysis of machine learning models, particularly in deep learning; they provide valuable insights into the performance, convergence, and generalization capabilities of a model during training. The training loss plot illustrates how well the model is learning from the training data over successive epochs: a decreasing training loss indicates that the model is converging and learning the patterns in the data. The validation loss plot complements the training loss by showing the model’s performance on unseen data. Monitoring both training and validation losses helps identify whether the model is overfitting or underfitting.
When a model performs well on the training data but poorly on validation data, it may be overfitting: a training plot that shows decreasing loss while the validation plot shows increasing or stagnating loss indicates potential overfitting. Conversely, if both training and validation losses are high and show minimal improvement, the model may be underfitting. Training and validation plots help diagnose these issues and guide adjustments to the model architecture or training process. The blue line is the training accuracy, which progressively reaches 100%. The black dotted line is the validation accuracy, and the marked points are where the model performed validation; the validation frequency is tied to the number of epochs. Similarly, the second graph is the training and validation loss plot, which represents the error the model is producing; a validation loss that remains high relative to the training loss is an indication of overfitting. Figures 9 and 10 show the training/validation accuracy and loss plots.
To quantitatively analyse the performance, we plot the confusion matrix for all the trials with the same parameters. We tested the model to obtain this matrix, which gives the exact numbers of predicted classes. A confusion matrix is a fundamental tool in evaluating the performance of a classification model. It provides a comprehensive and detailed breakdown of the model’s predictions and actual outcomes, allowing a deeper understanding of how well the model performs across different classes. It has four sections: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). TP and TN values represent the instances where the model correctly predicted the positive and negative classes, respectively; the sum of TP and TN divided by the total number of instances gives the accuracy of the model, a basic measure of overall performance. FP and FN, on the other hand, highlight instances where the model made incorrect predictions: FPs represent cases where the model predicted positive but the actual class is negative, while FNs represent cases where the model predicted negative but the actual class is positive. Understanding these errors is crucial for refining the model and addressing specific challenges. For example, in the ResNet50 matrix, out of a total of 26 images of the other class, 23 were predicted correctly as other, while 3 were misclassified as sickle; similarly, out of 72 sickle cell images, 70 were classified correctly while 2 were misclassified as other. Figure 11 shows one confusion matrix obtained for each trial.
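As a quick check, the per-class values reported in Table 2 for ResNet50 can be reproduced directly from these confusion-matrix counts, treating sickle as the positive class:

```python
# Confusion-matrix counts for the best ResNet50 trial (from the text above)
TP, FN = 70, 2   # sickle images classified correctly / missed
TN, FP = 23, 3   # other images classified correctly / misclassified as sickle

accuracy  = (TP + TN) / (TP + TN + FP + FN)                 # 93/98 -> 0.949 (94.9%)
precision = TP / (TP + FP)                                  # 70/73 -> 0.96
recall    = TP / (TP + FN)                                  # 70/72 -> 0.97
f1        = 2 * precision * recall / (precision + recall)   # -> 0.97

print(f"accuracy={accuracy:.3f}, precision={precision:.2f}, recall={recall:.2f}, f1={f1:.2f}")
```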
With respect to these confusion matrix values, the performance evaluation table was calculated using equations (1)–(3). It can be observed that for ResNet50, not only the accuracy but also the other parameters are in a better range than those of the other two models (Table 2).
Model | Class | n (Truth) | n (Classified) | Acc. (%) | Precision | Recall | F1-Score |
---|---|---|---|---|---|---|---|
ResNet50 | Other | 26 | 25 | 94.9 | 0.92 | 0.88 | 0.90 |
ResNet50 | Sickle | 72 | 73 | 94.9 | 0.96 | 0.97 | 0.97 |
GoogLeNet | Other | 26 | 14 | 85.71 | 0.93 | 0.50 | 0.65 |
GoogLeNet | Sickle | 72 | 84 | 85.71 | 0.85 | 0.99 | 0.91 |
ResNet18 | Other | 26 | 16 | 89.8 | 1.0 | 0.62 | 0.76 |
ResNet18 | Sickle | 72 | 82 | 89.8 | 0.88 | 1.0 | 0.94 |
As per standard practice, the training accuracy is the accuracy measured while the model is being trained; at this stage the model is being fitted to the dataset, so the accuracy should ideally approach 100%. Validation accuracy is measured when the model validates what it has learned; it is checked at specified intervals and, while ideally 100%, in practice it is lower than the training accuracy. Finally, the test accuracy is obtained when the model performs on unseen data and is, in practice, lower than the validation accuracy. Figure 12 shows the training, validation, and testing accuracy of the best network obtained.
3.1 XAI
XAI, when applied to the classification of SCD, allows us to discern the key regions in a blood smear image that significantly influence the model’s decision on whether the cells appear normal or sickle-shaped. This transparency is essential for instilling confidence in the model and ensuring its dependability for clinical applications. Grad-CAM operates by computing the gradients of a deep learning model’s output with respect to the activation values in the last convolutional layer. These gradients signify the contribution of each pixel in the feature map to the ultimate classification decision. By weighting these activation maps with the gradients and superimposing the result onto the original image, Grad-CAM generates a heatmap showcasing the specific regions that played a significant role in influencing the model’s prediction. Figures 13 and 14 show a few of the Grad-CAM images for sickle cell and non-sickle images. The XAI technique was applied to the best model (ResNet50 with 30 epochs and a batch size of 128). It can be observed that the heat map gives higher values in regions near sickle-shaped cells. The areas of the image where the heatmap has higher values are the regions that the network found most relevant for predicting the image as a sickle cell image; these are the parts of the image that likely contributed most to the network’s decision, containing the features common to the sickle cell images in the dataset.
For the normal class of the dataset, by contrast, it can be observed that the heat map values are high over almost 60–70% of the image. This is because, for normal cells, there is no specific abnormality in the pattern that could be considered a unique feature.
4 Discussion
Sickle cell disease is an inherited blood disorder that causes problems such as reduced oxygen-carrying capacity and can make a person weak or paralyzed, or even prove fatal in severe cases. This is due to changes in the shape and morphology of the RBCs. Diagnosis is a tedious job for the pathologist to carry out manually; it is also prone to errors and can be subjective, with the different cell orientations in particular causing subjectivity and human error. AI algorithms can quickly and accurately analyse blood images, leading to faster diagnosis and treatment. SCD is a complex disease with varying severity and phenotypes. AI can analyse large datasets of clinical data to identify patterns and predict individual patient outcomes, and it can capture subtle morphological changes in red blood cells that may be missed by the human eye. This information can be used to tailor treatment plans to each patient’s specific needs, improving the effectiveness of therapy and reducing side effects.
The deep learning models used here, GoogLeNet, ResNet18, and ResNet50, are pre-trained models that studies have shown to give accurate results for medical data classification. In our experiments, ResNet50 showed the best results among the three networks, with an accuracy of 94.90%. The various values calculated from the confusion matrix were close to their ideal values. The experimentation was performed with different parameters, such as the number of epochs and the batch size, which helped in selecting the optimum model. Increasing the number of epochs can lead to errors at testing time because of overfitting, where the model performs well on the training set but fails on a new set. There is also a reverse effect: if the number of epochs is too small, the model underfits and fails to learn the data fully, unlike the over-learning seen in overfitting. The selection of an ideal number of epochs is therefore crucial. In our experimentation, we explored three values, 30, 50, and 100, to determine the most suitable option; good results were obtained within 30 epochs, which is also beneficial as it reduces the time required for the entire process. Similarly, if the batch size is very small, noisy updates may be introduced, while increasing the batch size requires a system with good computational power. In this experimentation, a GPU system was used, and a batch size of 128 gave good results. For confirmation of the results, XAI was used. Among the various techniques, Grad-CAM was chosen, which produces a heat-map style plot; the red highlighted areas show the features that led to the classification of a particular image into a class. Healthcare professionals and patients need to trust the decisions made by AI systems, especially when those decisions can significantly impact patient care. XAI allows them to understand the reasoning behind the predictions, providing reassurance and fostering trust in the technology. Transparency builds confidence in the fairness and accountability of the AI system, reducing concerns about bias or discrimination in its decision-making process.
Alzubaidi et al. used a feature extractor followed by an error-correcting output codes (ECOC) classifier to classify sickle cells [48]; their accuracy reached 92.06%, which is lower than that reported in this work with a simpler method. Similarly, Aliyu et al. classified red blood cells in sickle cell anaemia by identifying abnormalities in the RBCs [49]. The dataset used there was collected from 130 SCA patients, and cells were automatically cropped to give a total of 9,000 images. Although the accuracy was around 95%, the images were technical replicates, reducing generality. Bheem Sen et al. used machine learning for the same application [50]. Although the processing time was lower, it included an extra feature extraction step, and the accuracy obtained was about 90% for the logistic regression and SVM classifiers and 92% for the random forest classifier.
Table 3 shows a comparison of state-of-the-art studies, along with their limitations, with respect to the proposed model.
Author, ref. | Methodology | Results | Comparison with this model |
---|---|---|---|
Alzubaidi et al. [48] | Feature extractor followed by ECOC classifier | 92.06% | Lower accuracy than the proposed method |
Aliyu et al. [49] | Identifying abnormalities in RBCs using deep learning | 95% | Technical replicates reduce generality; accuracy is comparable to the proposed method |
Bheem Sen et al. [50] | Machine learning | 90% for logistic regression and 92% for SVM classifier | Machine learning required an extra feature extraction step and achieved lower accuracy |
Our proposed method | Transfer learning with three models (ResNet50, ResNet18, and GoogLeNet) | 94.90% by ResNet50 | Transfer learning with ResNet50, ResNet18, and GoogLeNet, an easy approach to train the model; novelty in using Grad-CAM as a method for confidence in classification |
The proposed method employs a transfer learning approach for sickle cell classification, addresses key issues, utilizes a generalized dataset, introduces XAI to the field, and conducts a comparative study with widely recognized deep learning models.
It utilizes a transfer learning approach which is advantageous because pre-trained models have learned rich feature representations from diverse datasets like ImageNet. Fine-tuning these models for the specific task of sickle cell classification requires less data and computational resources compared to training a model from scratch.
The method aims to address various challenges commonly encountered in medical image classification, such as limited labelled data, model generalization, and interpretability.
The proposed method introduces XAI as a novel concept in the context of sickle cell classification. XAI focuses on making complex machine learning models interpretable and understandable. While deep learning models are known for their powerful capabilities, their black-box nature can be a limitation, especially in medical applications where interpretability is crucial. Based on the literature review, XAI has not been extensively used in studies related to sickle cell classification. The inclusion of XAI in this method represents a novel contribution, offering insights into the decision-making process of the deep learning models, which can enhance trust and acceptance among healthcare professionals.
These elements collectively contribute to the method’s potential for providing accurate and interpretable solutions for sickle cell classification while considering the challenges inherent in medical imaging tasks.
One of the primary limitations of the model is its restriction to a specific magnification level (100×). Since it has not been trained on various magnifications, its applicability may be constrained in scenarios where different magnification levels are crucial for accurate predictions. The lack of training data across different magnifications raises concerns about potential biases in the model’s predictions. It might exhibit a preference for features or patterns commonly found in 100× magnification images, leading to less reliable outcomes in other contexts. As with any machine learning model, the performance is heavily dependent on the quality and quantity of the training data. The model’s effectiveness could be compromised if not provided with a sufficiently diverse and representative dataset.
To enhance the model’s versatility and improve its generalization across various magnifications, future work should focus on acquiring and incorporating a more extensive and diverse dataset. Training the model on images from different magnifications will contribute to its adaptability in real-world scenarios. Employing advanced data augmentation techniques can be explored to artificially increase the diversity of the training dataset. This approach could involve simulating different magnifications and variations in image characteristics to enhance the model’s ability to handle a broader scope of input data.
5 Conclusion
With the latest technologies emerging in different fields, AI has been heavily used in medical applications, and deep learning has taken its place in assisting pathologists with diagnosis. A highly robust model can eventually ease the job of a pathologist. Hence, in this study we use pre-trained networks with transfer learning for sickle cell diagnosis. This study proposed a deep learning technique to detect the presence of sickle cells in a blood smear and classify them using three different networks. Comparing the three models (GoogLeNet, ResNet18, and ResNet50), ResNet50 gave the highest accuracy of 94.90%. To understand the predictions, an XAI technique named Grad-CAM was implemented, which helps in the transparency and interpretation of the results. Additionally, the model can be adjusted if the features identified do not align with the actual case.
The theoretical implications of this research lie in the advancement of deep learning methodologies for cell categorization, demonstrating substantial promise in revolutionizing the scrutiny of cells and the identification of medical conditions. The practical implications extend to the potential transformation of diagnostic accuracy and efficiency through ongoing progress in advanced deep learning methods and collaborative efforts between AI specialists and domain-specific professionals.
This study contributes by introducing a novel deep learning technique for sickle cell detection, offering a comparative analysis of three distinct networks, and incorporating the Grad-CAM XAI technique to enhance result transparency and interpretability. The practical advantages of our approach include the development of a highly accurate model, ResNet50, for sickle cell diagnosis, potentially easing the burden on pathologists and improving diagnostic efficiency.
Despite these advancements, certain challenges persist, including the need for rigorous validation, addressing class imbalance issues, managing interpretability concerns, and ensuring compliance with regulatory requirements. Ethical considerations, such as safeguarding patient privacy, mitigating bias, ensuring fairness, and maintaining transparency, require careful attention in the application of deep learning for cell classification. Additionally, dataset generality is an important concern which needs to be tackled.
Looking ahead, future research should focus on exploring methodologies for rigorous validation, addressing class imbalance issues, and resolving interpretability concerns in deep learning-based cell classification. It should also investigate ethical aspects and develop frameworks to ensure patient privacy, mitigate bias, promote fairness, and enhance transparency in AI-driven medical applications. Future research can also focus on establishing rigorous statistical frameworks for evaluating and comparing the performance of different sickle cell classification models. This involves conducting hypothesis tests to determine whether one model variant is statistically superior to another in terms of predictive performance, benchmarking extensively against standard and conventional approaches for sickle cell classification, and conducting large-scale studies involving diverse datasets to assess the generalization capabilities of sickle cell classification models. Finally, collaboration with healthcare professionals and AI specialists should continually refine and expand the scope of deep learning techniques in cell classification, keeping pace with evolving technological landscapes and medical demands.
Acknowledgements
We would like to thank the Manipal Academy of Higher Education and the University Grants Commission, India, for giving us a platform to conduct this study.
Funding information: The article is funded by Manipal Academy of Higher Education.
Author contributions: Conceptualization: N.S.; formal analysis: A.G.; investigation: S.B.; methodology: N.G., N.S., and K.C.; project administration: N.S.; resources: M.G.B. and S.B.; software: N.G. and A.G.; supervision: N.S.; validation: A.G.; visualization: K.C.; writing – original draft: N.G.; writing – review and editing: M.G.B. and K.C.
Conflict of interest: No potential competing interest was reported by the authors.
Data availability statement: Data will be made available by the corresponding author (Niranjana Sampathila) upon prior request.
References
[1] Kato GJ, Piel FB, Reid CD, Gaston MH, Ohene-Frempong K, Krishnamurti L, et al. Sickle cell disease. Nat Rev Dis Primers. 2018;4(1):1–22. doi:10.1038/nrdp.2018.10.
[2] Buchanan GR, DeBaun MR, Quinn CT, Steinberg MH. Sickle cell disease. ASH Education Program Book. Vol. 2004, Issue 1; 2004. p. 35–47.
[3] Piel FB, Steinberg MH, Rees DC. Sickle cell disease. N Engl J Med. 2017;376(16):1561–73. doi:10.1056/NEJMra1510865.
[4] Bunn HF. Pathogenesis and treatment of sickle cell disease. N Engl J Med. 1997;337(11):762–9. doi:10.1056/NEJM199709113371107.
[5] Rees DC, Williams TN, Gladwin MT. Sickle-cell disease. Lancet. 2010;376(9757):2018–31. doi:10.1016/S0140-6736(10)61029-X.
[6] Stuart MJ, Nagel RL. Sickle-cell disease. Lancet. 2004;364(9442):1343–60. doi:10.1016/S0140-6736(04)17192-4.
[7] Pauling L, Itano HA, Singer SJ, Wells IC. Sickle cell anemia, a molecular disease. Science. 1949;110(2865):543–8. doi:10.1126/science.110.2865.543.
[8] Lonergan GJ, Cline DB, Abbondanzo SL. Sickle cell anemia. Radiographics. 2001;21(4):971–94. doi:10.1148/radiographics.21.4.g01jl23971.
[9] Kapoor S, Little JA, Pecker LH. Advances in the treatment of sickle cell disease. In Mayo Clinic Proceedings. Elsevier; 2018. p. 1810–24. doi:10.1016/j.mayocp.2018.08.001.
[10] Chaturvedi S, DeBaun MR. Evolution of sickle cell disease from a life-threatening disease of children to a chronic disease of adults: The last 40 years. Am J Hematol. 2016;91(1):5–14. doi:10.1002/ajh.24235.
[11] Hosseinkhani H, Domb AJ, Sharifzadeh G, Nahum V. Gene therapy for regenerative medicine. Pharmaceutics. 2023;15(3):856. doi:10.3390/pharmaceutics15030856.
[12] Steinberg MH. Management of sickle cell disease. N Engl J Med. 1999;340(13):1021–30. doi:10.1056/NEJM199904013401307.
[13] Claster S, Vichinsky EP. Managing sickle cell disease. BMJ. 2003;327(7424):1151–5. doi:10.1136/bmj.327.7424.1151.
[14] Eaton WA, Bunn HF. Treating sickle cell disease by targeting HbS polymerization. Blood J Am Soc Hematol. 2017;129(20):2719–26. doi:10.1182/blood-2017-02-765891.
[15] Neel JV. The inheritance of sickle cell anemia. Science. 1949;110(2846):64–6. doi:10.1126/science.110.2846.64.
[16] Rees DC, Gibson JS. Biomarkers in sickle cell disease. Br J Haematol. 2012;156(4):433–45. doi:10.1111/j.1365-2141.2011.08961.x.
[17] Buchanan GR, DeBaun MR, Quinn CT, Steinberg MH. Sickle cell disease. ASH Education Program Book. Vol. 2004, Issue 1; 2004. p. 35–47. doi:10.1182/asheducation-2004.1.35.
[18] Khan S, Yairi T. A review on the application of deep learning in system health management. Mech Syst Signal Process. 2018;107:241–65. doi:10.1016/j.ymssp.2017.11.024.
[19] Badnjević A, Avdihodžić H, Gurbeta Pokvić L. Artificial intelligence in medical devices: Past, present and future. Psychiatr Danub. 2021;33(suppl 3):101–6. doi:10.5005/sar-1-1-2-101.
[20] Pacis DMM, Subido EDC, Bugtai NT. Trends in telemedicine utilizing artificial intelligence. In AIP Conference Proceedings. AIP Publishing; 2018. doi:10.1063/1.5023979.
[21] Ahmed Z, Mohamed K, Zeeshan S, Dong X. Artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine. Database. 2020;2020:baaa010. doi:10.1093/database/baaa010.
[22] Busnatu ȘS, Niculescu AG, Bolocan A, Andronic O, Pantea Stoian AM, Scafa-Udriște A, et al. A review of digital health and biotelemetry: modern approaches towards personalized medicine and remote health assessment. J Pers Med. 2022;12(10):1656. doi:10.3390/jpm12101656.
[23] Hussain SM, Buongiorno D, Altini N, Berloco F, Prencipe B, Moschetta M, et al. Shape-based breast lesion classification using digital tomosynthesis images: the role of explainable artificial intelligence. Appl Sci. 2022;12(12):6230. doi:10.3390/app12126230.
[24] Kakogeorgiou I, Karantzalos K. Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing. Int J Appl Earth Observ Geoinf. 2021;103:102520. doi:10.1016/j.jag.2021.102520.
[25] Yenilmez B, Knowlton S, Yu CH, Heeney MM, Tasoglu S. Label-free sickle cell disease diagnosis using a low-cost, handheld platform. Adv Mater Technol. 2016;1(5):1600100. doi:10.1002/admt.201600100.
[26] Elsabagh AA, Elhadary M, Elsayed B, Elshoeibi AM, Ferih K, Kaddoura R, et al. Artificial intelligence in sickle disease. Blood Rev. 2023;61:101102. doi:10.1016/j.blre.2023.101102.
[27] Shaikho EM, Farrell JJ, Alsultan A, Qutub H, Al-Ali AK, Figueiredo MS, et al. A phased SNP-based classification of sickle cell anemia HBB haplotypes. BMC Genomics. 2017;18(1):1–7. doi:10.1186/s12864-017-4013-y.
[28] Cai S, Han IC, Scott AW. Artificial intelligence for improving sickle cell retinopathy diagnosis and management. Eye. 2021;35(10):2675–84. doi:10.1038/s41433-021-01556-4.
[29] de Haan K, Ceylan Koydemir H, Rivenson Y, Tseng D, Van Dyne E, Bakic L, et al. Automated screening of sickle cells using a smartphone-based microscope and deep learning. NPJ Digit Med. 2020;3(1):76. doi:10.1038/s41746-020-0282-y.
[30] Elsalamony HA. Healthy and unhealthy red blood cell detection in human blood smears using neural networks. Micron. 2016;83:32–41. doi:10.1016/j.micron.2016.01.008.
[31] Alzubaidi L, Fadhel MA, Al-Shamma O, Zhang J, Duan Y. Deep learning models for classification of red blood cells in microscopy images to aid in sickle cell anemia diagnosis. Electron (Basel). 2020;9(3):427. doi:10.3390/electronics9030427.
[32] Pasupa K, Vatathanavaro S, Tungjitnob S. Convolutional neural networks based focal loss for class imbalance problem: a case study of canine red blood cells morphology classification. J Ambient Intell Humaniz Comput. 2020;14:1–17. doi:10.1007/s12652-020-01773-x.
[33] Krishna ST, Kalluri HK. Deep learning and transfer learning approaches for image classification. Int J Recent Technol Eng (IJRTE). 2019;7(5S4):427–32.
[34] Goswami NG, Goswami A, Sampathila N, Bairy GM. Sickle cell classification using deep learning. In 2023 3rd International Conference on Intelligent Technologies (CONIT). IEEE; 2023. p. 1–6. doi:10.1109/CONIT59222.2023.10205802.
[35] Khanna M, Agarwal A, Singh LK, Thawkar S, Khanna A, Gupta D. Radiologist-level two novel and robust automated computer-aided prediction models for early detection of COVID-19 infection from chest X-ray images. Arab J Sci Eng. 2023;48(8):11051–83. doi:10.1007/s13369-021-05880-5.
[36] Khanna M, Singh LK, Thawkar S, Goyal M. Deep learning based computer-aided automatic prediction and grading system for diabetic retinopathy. Multimed Tools Appl. 2023;82:1–48. doi:10.1007/s11042-023-14970-5.
[37] Manescu P, Bendkowski C, Claveau R, Elmi M, Brown BJ, Pawar V, et al. A weakly supervised deep learning approach for detecting malaria and sickle cells in blood films. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part V. Springer; 2020. p. 226–35. doi:10.1007/978-3-030-59722-1_22.
[38] Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston; 2015. p. 1–9. doi:10.1109/CVPR.2015.7298594.
[39] Targ S, Almeida D, Lyman K. Resnet in Resnet: Generalizing residual architectures. arXiv preprint arXiv:1603.08029; 2016.
[40] Sarwinda D, Paradisa RH, Bustamam A, Anggia P. Deep learning in image classification using residual network (ResNet) variants for detection of colorectal cancer. Procedia Comput Sci. 2021;179:423–31. doi:10.1016/j.procs.2021.01.025.
[41] Fränti P, Mariescu-Istodor R. Soft precision and recall. Pattern Recognit Lett. 2023;167:115–21. doi:10.1016/j.patrec.2023.02.005.
[42] Derczynski L. Complementarity, F-score, and NLP evaluation. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16); 2016. p. 261–6.
[43] Goutte C, Gaussier E. A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In European Conference on Information Retrieval. Springer; 2005. p. 345–59. doi:10.1007/978-3-540-31865-1_25.
[44] Samek W, Montavon G, Vedaldi A, Hansen LK, Müller K-R. Explainable AI: interpreting, explaining and visualizing deep learning. Vol. 11700. Springer Nature; 2019. doi:10.1007/978-3-030-28954-6.
[45] Gunning D, Stefik M, Choi J, Miller T, Stumpf S, Yang G-Z. XAI—Explainable artificial intelligence. Sci Robot. 2019;4(37):eaay7120. doi:10.1126/scirobotics.aay7120.
[46] Holzinger A, Saranti A, Molnar C, Biecek P, Samek W. Explainable AI methods – a brief overview. In International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers. Springer; 2020. p. 13–38. doi:10.1007/978-3-031-04083-2_2.
[47] Xu F, Uszkoreit H, Du Y, Fan W, Zhao D, Zhu J. Explainable AI: A brief survey on history, research areas, approaches and challenges. In Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, Proceedings, Part II. Springer; 2019. p. 563–74. doi:10.1007/978-3-030-32236-6_51.
[48] Alzubaidi L, Al-Shamma O, Fadhel MA, Farhan L, Zhang J. Classification of red blood cells in sickle cell anemia using deep convolutional neural network. In Intelligent Systems Design and Applications: 18th International Conference on Intelligent Systems Design and Applications (ISDA 2018), Vellore, India, December 6–8, 2018. Vol. 1. Springer; 2020. p. 550–9. doi:10.1007/978-3-030-16657-1_51.
[49] Aliyu HA, Razak MAA, Sudirman R, Ramli N. A deep learning AlexNet model for classification of red blood cells in sickle cell anemia. Int J Artif Intell. 2020;9(2):221–8. doi:10.11591/ijai.v9.i2.pp221-228.
[50] Sen B, Ganesh A, Bhan A, Dixit S. Deep learning based diagnosis of sickle cell anemia in human RBC. In 2021 2nd International Conference on Intelligent Engineering and Management (ICIEM). London, United Kingdom; 2021. p. 526–9. doi:10.1109/ICIEM51511.2021.9445293.
© 2024 the author(s), published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.