Abstract
Background
COVID-19 is a disease caused by a novel coronavirus that affects the upper respiratory tract as well as the lungs. The scale of the global COVID-19 pandemic, its rate of spread, and its death toll continue to grow. Computed tomography (CT) scans can be used to detect and analyze COVID-19 cases. In CT images/scans, ground-glass opacity (GGO) is found in the early stages of infection, while superimposed pulmonary consolidation appears in later stages.
Methods
This research investigates quantum machine learning (QML) and classical machine learning (CML) approaches for the analysis of COVID-19 images. Recent developments in quantum computing have led researchers to explore new ideas and approaches using QML. The proposed approach consists of two phases: in phase I, synthetic CT images are generated through a conditional generative adversarial network (CGAN) to increase the size of the dataset for accurate training and testing. In phase II, COVID-19/healthy image classification is performed using two proposed models: CML and QML.
Result
The proposed model achieved 0.94 precision (Pn), 0.94 accuracy (Ac), 0.94 recall (Rl), and 0.94 F1-score (Fe) on the POF Hospital dataset, and 0.96 Pn, 0.96 Ac, 0.95 Rl, and 0.96 Fe on the UCSD-AI4H dataset.
Conclusion
The proposed method achieved better results when compared to the latest published work in this domain.
Introduction
COVID-19 is a global pandemic. It has rapidly become a severe public health problem worldwide. COVID-19 is a zoonotic infection believed to have been transmitted from bats to humans [1, 2]. Mild symptoms appear in 82% of cases. Of 93,194,922 worldwide COVID-19 cases, 2,014,729 have led to death [3]. The major symptoms are cough, fever, and dyspnea. In severe cases, it can cause pneumonia, septic shock, failure of multiple organs, and even death. The infection rate is higher in males than in females. No death was reported among children 0–9 years old [4]. Based on the latest guidelines published by the Chinese government, COVID-19 diagnosis is confirmed through gene sequencing of blood or RT-PCR of respiratory samples, which is a key indicator for hospitalization. Currently, the low sensitivity of RT-PCR leads to several COVID-19 cases going unidentified, which can in turn prevent patients from receiving proper treatment [5, 6]. The COVID-19 virus is highly infectious and can spread quickly, especially when diagnosed late. Delays in diagnosis and medical treatment can lead to a severe stage of pneumonia and permanent lung damage that reduces the patient’s survival rate [7]. According to the World Health Organization, COVID-19 may cause permanent lung parenchymal damage, similar to SARS, with a honeycomb-like appearance. It can also damage various other organs in the human body. It is believed to spread through infected droplets produced by coughing and exhaling. Computed tomography (CT) is a non-invasive imaging technique for detecting lung involvement and determining the severity of COVID-19 [8].
Artificial intelligence (AI) approaches are commonly used for the analysis of medical imaging [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. AI is categorized into two sub-groups, ML and DL, where DL is an advanced form of classical ML [26]. Deep learning (DL) methodologies extract high-level features in a pipeline that learns complicated patterns. The InceptionV3 [24] and VGG-16 pre-trained models with SVM have been utilized for viral, bacterial, and healthy image classification [25]. DL approaches can be used to diagnose COVID-19; methodologies such as the recurrent neural network (RNN) [27], conditional generative adversarial networks (CGAN) [28], autoencoder (AE) [29], and hybrids such as RNN-CNN [30] and AE-CNN [31] have been used to detect COVID-19 [32, 33].
Extensive work has been carried out in the literature to detect COVID-19. This work has certain limitations in the classification of CT slices due to limited data availability and complex feature analysis. Therefore, the main objective of this study is to construct a new framework that overcomes these limitations. Hence, an improved CGAN model is proposed for synthetic image creation to improve classification accuracy. Next, a complex feature analysis is performed that has not been implemented in the previous literature. The complicated patterns in the images are learned using two proposed models trained from scratch. The first model is based on a convolutional neural network, constructed by combining an optimal structure of layers with selected learning parameters. This enhances COVID-19 image classification accuracy. The second model is based on a quantum neural network, constructed on a 4-qubit quantum circuit. We select the number of layers and optimal learning parameters to provide a significant improvement in the learning of complex patterns.
The primary contributions of the proposed work are as follows:
1. A modified CGAN model is designed with selected learning parameters for synthetic CT image generation. The generated synthetic images are similar to real CT images, which is useful for model training/testing and directly impacts system accuracy.
2. Two models trained from scratch, CML and QNN, are proposed to analyze COVID-19 CT images. The proposed models use selected hyperparameters and layers with different activation units for COVID-19 classification.
The article organization is stated as follows: “Related Works” covers similar work, “Proposed Architectures” outlines the architectures suggested, “Results and Discussion” addresses the experimental effects, and the conclusion is found in “Conclusion.”
Related Works
The COVID-19 virus has caused a worldwide pandemic [34]. Transfer learning models such as SqueezeNet, MobileNetV2, VGG-19, VGG-16, ResNet-18/50/101, and Xception are widely used for classification [35,36,37,38,39,40]. The DarkCovidNet model was developed to diagnose COVID-19 [41].
Quantum computing (QC) provides a more comprehensive framework for DL than classical computing (CC). QC improves the optimization of the underlying objective function and reduces the training time of deep learning models [42,43,44]. The convolutional neural network (CNN) is a classical machine learning model well suited to processing images. The CNN is based on the idea of convolutional layers that apply a local convolution instead of processing the full input with a global function. This idea has been extended in the context of variational quantum circuits. The major difference from classical convolution is that a quantum circuit can produce highly complex kernels whose computation could be classically intractable [45].
In recent years, deep learning has achieved great success in medical imaging. Several machine learning approaches are used to predict different health issues from radiological images. Many ML applications can produce optimal outcomes but still need human oversight to aid ML decisions [46]. Machine learning is a pervasive approach in many fields of engineering and science. Due to the restricted architecture of classical computers, there is a need to move to quantum computing to develop QML. The purpose of this study is to find more efficient algorithms to manipulate and process visual information in real time at very high speed [43, 47].
Several classical machine learning approaches have previously been used for COVID-19 classification, but they still have limitations: prominent feature extraction remains a challenging task. In this research, CML and QNN models are designed to resolve this issue.
Proposed Architectures
A new paradigm comprising two stages is suggested in this study. In stage I, synthetic data is generated by a modified CGAN [48] architecture, in which the generator and discriminator networks are trained with optimal hyperparameters. In stage II, the images are supplied to the two proposed classification architectures, CML and QNN [49], as illustrated in Fig. 1.
Conditional Generative Adversarial Network
CGAN is a deep network that generates synthetic data with realistic imaging characteristics. A generative adversarial network (GAN) combines a generator and a discriminator. The generator creates synthetic CT images that are similar to the input CT images, while the discriminator takes data in batches of observations and classifies them. The generator is trained to maximize the discriminator loss, while the discriminator is trained to minimize it. The generator creates synthetic data to trick the discriminator, and the discriminator is trained to classify the data into the corresponding classes, real and synthetic, as shown in Fig. 2.
Generative Network
In this network, a projection-and-reshape layer converts the 1-by-1-by-100 noise input into 7-by-7-by-128 arrays. Upsampling is then performed by transposed convolution layers with a \(5\times 5\) filter size and stride 2, each followed by batch normalization and ReLU, with the output cropped on every edge. At the final stage, a tanh activation follows a fully connected layer that reshapes the output to the selected size.
Discriminative Network
The discriminator network takes images of dimension \(64\times 64\times 3\) and returns scalar prediction scores through a series of convolutional, LeakyReLU, batch normalization, and dropout layers. The parameters selected for model training are listed in Table 1.
The layered architecture of CGAN with the activation units is depicted in Table 2.
Proposed Classification Architectures
The synthetic images are supplied to the CML and QNN models for COVID-19 classification.
Classical Machine Learning
The CML model proposed for classification comprises three kinds of layers, namely one convolutional, one flatten, and two dense layers, as presented in Fig. 3.
The convolutional layer is the primary building block of a CNN. It maps input images of size \(128\times 128\times 3\) through a \(3\times 3\) kernel and outputs activations, which is mathematically explained as follows:

$$\left(f*h\right)\left(m,n\right)=\sum_{i}\sum_{j}f\left(m-i,\,n-j\right)\,h\left(i,j\right) \qquad (1)$$

where f denotes the input image, h represents the kernel, and m and n index the row and column, respectively. The flatten layer collapses the spatial input dimensions \(\mathrm{height}\times \mathrm{width}\times \mathrm{channel}\) into a single channel dimension. The dense layer is a layer of regular neurons of the neural network, in which each neuron receives input from all neurons of the preceding layer, being densely connected, as mentioned in Eq. (2):

$$y=\varphi \left(Wx+b\right) \qquad (2)$$

where W is the weight matrix, x the input vector, b the bias, and \(\varphi\) the activation function.
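As an illustration of the convolution operation above, the following is a minimal NumPy sketch (not the authors' implementation); `conv2d_valid` is a hypothetical helper name, and the kernel flip distinguishes true convolution from the cross-correlation most DL frameworks compute:

```python
import numpy as np

def conv2d_valid(f, h):
    """2D convolution (valid region) of image f with kernel h,
    matching (f*h)(m,n) = sum_i sum_j f(m-i, n-j) h(i, j).
    The kernel is flipped; cross-correlation would skip the flip."""
    kh, kw = h.shape
    out_h = f.shape[0] - kh + 1
    out_w = f.shape[1] - kw + 1
    h_flipped = h[::-1, ::-1]
    out = np.zeros((out_h, out_w))
    for m in range(out_h):
        for n in range(out_w):
            # Sliding-window dot product with the flipped kernel
            out[m, n] = np.sum(f[m:m + kh, n:n + kw] * h_flipped)
    return out
```

For a \(128\times 128\) input and a \(3\times 3\) kernel, this produces a \(126\times 126\) activation map per filter, which the flatten and dense layers then process.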
The ReLU and softmax activation functions are utilized with 13 and 2 neurons, respectively. The model is trained with hyperparameters selected after comprehensive experiments, as shown in Table 3.
Quanvolutional Neural Network
The new QNN contains three kinds of layers: a 4-qubit quantum layer, three dense layers with specified activation units, and drop-out layers. The quantum layer replaces the convolutional layer, and the 4-qubit circuit is used to generate the \(20\times 20\) quantum images. The quantum-generated images are learned in a pipeline; dense layers with ReLU and softmax and different activation units are then applied. The model learning parameters are the same as those of the CML model (see Table 3) for a fair comparison between the two architectures.
Quantum Convolution
In the proposed quantum machine learning model, input images are divided into \(2\times 2\) square regions that are embedded into a quantum circuit. With the qubits initialized in the ground state, the patches are encoded through parametrized rotations. A unitary is then applied, which may be generated by a variational quantum circuit. Finally, the quantum system is measured, yielding a classical list of expectation values. These values are mapped to the different channels of a single output pixel. Repeating the process over different regions scans the full image and produces a multi-channel output image. The quantum convolution may be followed by further classical or quantum layers. The PennyLane qubit device [50, 51] is initialized to simulate a 4-qubit system. The associated QNode represents a quantum circuit comprising an embedding layer of local \({R}_{y}\) rotations (angles scaled by a factor of \(\pi\)) and n random circuit layers. In the final measurement, 4 expectation values are estimated. In the convolution process, each \(2\times 2\) square of pixels is processed through the quantum circuit, and the 4 expectation values are mapped to the 4 channels of a single output pixel. The proposed quantum model is presented in Fig. 4.
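The embedding-and-measurement step can be sketched with a small state-vector simulation in plain NumPy (the paper uses PennyLane [50, 51]; this sketch covers only the \({R}_{y}(\pi \cdot \mathrm{pixel})\) embedding and Pauli-Z expectation values, omitting the random circuit layers, and `quanv_patch` is a hypothetical helper name):

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single_qubit(state, gate, qubit, n_qubits=4):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    op = np.eye(1)
    for q in range(n_qubits):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

def quanv_patch(patch, n_qubits=4):
    """Encode a 2x2 pixel patch into 4 qubits via RY(pi * pixel)
    rotations (starting from the ground state |0000>) and return the
    4 Pauli-Z expectation values, i.e., one 4-channel output pixel."""
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0                         # ground state |0000>
    for q, pixel in enumerate(patch.ravel()):
        state = apply_single_qubit(state, ry(np.pi * pixel), q)
    probs = np.abs(state) ** 2
    expvals = []
    for q in range(n_qubits):
        # <Z> on qubit q: probabilities weighted by +1/-1 per bit value
        signs = np.array([1 if (i >> (n_qubits - 1 - q)) & 1 == 0 else -1
                          for i in range(2 ** n_qubits)])
        expvals.append(float(probs @ signs))
    return expvals
```

An all-zero patch leaves the qubits in the ground state (all expectations +1), while an all-one patch flips every qubit (all expectations −1), so the 4 output channels smoothly encode the patch intensities.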
Quanvolutional Neural Networks
The quantum-generated images are transferred to the QNN, where three dense layers are used: layers of 60 and 500 neurons with ReLU activation, and a layer of 2 neurons with softmax for feature mapping. A drop-out rate of 0.5 is utilized. The proposed QNN building blocks are presented in Fig. 5.
Results and Discussion
The proposed method is evaluated on two benchmark datasets. The UCSD-AI4H/COVID-CT dataset is publicly available and contains 812 COVID-19/non-COVID-19 images of 216 patients [52, 53]. The second dataset was collected from POF Hospital, Pakistan, and consists of 4127 healthy and 5421 COVID-19-affected CT images. Furthermore, synthetic images are generated by employing the CGAN model. Table 4 provides a comprehensive overview of the datasets.
Table 4 describes the UCSD-AI4H/COVID-CT dataset and the privately collected POF Hospital dataset. The UCSD-AI4H/COVID-CT dataset contains 406 healthy and 406 COVID-19 actual images; 812 synthetic images (406 healthy and 406 COVID-19) are generated using the CGAN model, so a total of 1612 images (actual + synthetic) of this dataset are utilized for model training and testing.

For the second dataset, 9548 synthetic images are generated with the CGAN model from the 4127 healthy and 5421 COVID-19 cases, giving a total of 19,096 images (actual + synthetic) for model training and testing.
The total actual and synthetic images are supplied to training and testing using 0.4 and 0.5 hold-out validation. In the 0.4 hold-out split, 60% of the data is used for training and 40% for testing; in the 0.5 split, the data is divided randomly into two equal subsets, one for training and one for testing.
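The two hold-out splits can be sketched as follows (a minimal NumPy illustration, not the authors' code; `holdout_split` is a hypothetical helper, and the placeholder arrays stand in for the real \(128\times 128\times 3\) CT slices):

```python
import numpy as np

def holdout_split(X, y, test_fraction, seed=0):
    """Shuffle the dataset and split it into train/test subsets.
    test_fraction=0.4 gives the 60/40 split; 0.5 the 50/50 split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(round(test_fraction * len(X)))
    return X[idx[n_test:]], y[idx[n_test:]], X[idx[:n_test]], y[idx[:n_test]]

# Placeholder features standing in for the 1612 UCSD-AI4H images
# (actual + synthetic); real inputs would be 128x128x3 CT slices.
X = np.zeros((1612, 2))
y = np.zeros(1612)
Xtr, ytr, Xte, yte = holdout_split(X, y, test_fraction=0.4)
```

With `test_fraction=0.4`, 967 images go to training and 645 to testing; with `test_fraction=0.5`, the 1612 images split into two equal halves.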
This study is implemented on a Core i7 CPU with Windows 10, an SSD, 16 GB RAM, and an NVIDIA 2070 GPU. Two experiments are performed to evaluate the proposed method. The CGAN model performance is computed in the first experiment. In the second experiment, classification is performed on the images synthetically generated by the CGAN model, and the classification results are analyzed using the two models, CML and QNN.
Experiment #1: Synthetic Image Generation Using CGAN
To assess the efficiency of the modified CGAN model, synthetic CT images are produced. The model is tuned to reduce generative loss while increasing prediction accuracy. The generative loss function is mathematically explained below.
The goal of the generator is to produce data that the discriminator classifies as real. The generator maximizes the probability that its images are judged real by minimizing the corresponding negative log function.

Given the discriminator output Y:

\(\widehat{Y}=\sigma (Y)\) represents the probability that the input belongs to the real class;

\(1-\widehat{Y}\) denotes the probability that the input belongs to the generated class.

The generator loss function is expressed mathematically as

$$\mathrm{loss}_{\mathrm{generator}}=-\mathrm{mean}\left(\mathrm{log}\left({\widehat{Y}}_{\mathrm{Generated}}\right)\right)$$

where \({\widehat{Y}}_{\mathrm{Generated}}\) is the discriminator output probability for the generated synthetic images.
The discriminator’s goal is not to be tricked by the generator: it maximizes the probability assigned to real training images and minimizes the probability assigned to generated ones by minimizing the corresponding negative log function. The discriminator loss function is expressed mathematically as

$$\mathrm{loss}_{\mathrm{discriminator}}=-\mathrm{mean}\left(\mathrm{log}\left({\widehat{Y}}_{\mathrm{Real}}\right)\right)-\mathrm{mean}\left(\mathrm{log}\left(1-{\widehat{Y}}_{\mathrm{Generated}}\right)\right)$$

where \({\widehat{Y}}_{\mathrm{Real}}\) is the discriminator output probability for real input images.
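The two adversarial losses, following the standard (C)GAN formulation described above, can be computed as in this short NumPy sketch (function names are illustrative, not the authors' code):

```python
import numpy as np

def generator_loss(y_hat_generated):
    """-mean(log(Yhat_generated)): small when the discriminator
    scores generated images close to 1 (i.e., judges them real)."""
    return -np.mean(np.log(y_hat_generated))

def discriminator_loss(y_hat_real, y_hat_generated):
    """-mean(log(Yhat_real)) - mean(log(1 - Yhat_generated)):
    small when real images score near 1 and fakes near 0."""
    return (-np.mean(np.log(y_hat_real))
            - np.mean(np.log(1.0 - y_hat_generated)))
```

If the discriminator assigns a generated image probability 0.5, the generator loss is \(\log 2 \approx 0.693\); a perfectly fooled discriminator (probability 1) drives the generator loss to zero.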
The generated images with generator loss as well as discriminator prediction scores are shown in Fig. 6.
The quantitative analysis of the predicted discriminator’s scores and generative loss is shown in Table 5.
The synthetic generated images are presented in Fig. 7.
Some synthetic generated images with their corresponding class labels are displayed in Fig. 8.
Experiment #2: Classification Using Classical Machine Learning and Quanvolutional Neural Network Model
In Experiment #2, classification results are computed using a variety of measures, i.e., accuracy (Ac), precision (Pn), recall (Rl), and F1-score (Fe), as depicted in Tables 6, 7, 8, 9, 10, 11, 12 and 13. The proposed solution's output is plotted as confusion matrices in Fig. 9.
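The four reported measures follow directly from the confusion-matrix counts, as in this small sketch (illustrative only; the counts shown are hypothetical, not taken from the paper's tables):

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute Ac, Pn, Rl, and Fe from confusion-matrix counts:
    true/false positives (tp/fp) and false/true negatives (fn/tn)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a 100-image test set
ac, pn, rl, fe = classification_metrics(tp=48, fp=2, fn=3, tn=47)
```

With these example counts, Ac = 0.95, Pn = 0.96, and Rl ≈ 0.94, matching the style of values reported in Tables 6 through 13.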
Classification Results Based on Classical Machine Learning
Tables 6, 7, 8 and 9 display the classification results of the CML model on the benchmark datasets using the 0.4 and 0.5 splits.
Results on the Publicly Available (Chinese Hospital) Dataset
The classification results of the CML model on the UCSD-AI4H/COVID-CT dataset are displayed in Tables 6 and 7: Table 6 shows the results using the 0.4 split, and Table 7 those using the 0.5 split.
Results on the Privately Collected (POF Hospital) Dataset
The computed outcomes on the POF Hospital dataset are given in Tables 8 and 9: Table 8 reports the CML results using the 0.4 split, and Table 9 those using the 0.5 split.
Classification Results based on Quanvolutional Neural Network
COVID-19 classification is performed using the QNN model, with results presented in Tables 10, 11, 12 and 13 for the 0.4 and 0.5 splits.
Results on the Publicly Available (Chinese Hospital) Dataset
The classification results of the QNN model on the UCSD-AI4H/COVID-CT dataset are given in Tables 10 and 11: Table 10 presents the results using the 0.4 split, and Table 11 those using the 0.5 split.
Results on the Privately Collected (POF Hospital) Dataset
Tables 12 and 13 display the classification findings on the POF Hospital dataset, using the 0.4 and 0.5 splits, respectively.
Results Comparison of the Proposed Classical Machine Learning and Quanvolutional Neural Network
The performance of the proposed CML and QNN models is shown graphically in Fig. 10, which presents the classification results of both models using the 0.4 and 0.5 splits on the benchmark datasets; the red line shows the results with quantum layers (QNN model) and the blue line those without (CML model).
In this article, lung CT images are classified using two architectures, CML and QNN. Experimental outcomes show that the QNN performed better on both datasets compared to the CML. The findings of the proposed approach are compared to the five most recent methodologies in Table 14.
The existing methods utilized transfer learning models such as ResNet-50, VGG-16, VGG-19, Xception, Inception ResNet, InceptionV3, NASNet-Large, DenseNet121, ResNet50V2, Inception, and DenseNet169 for classification. These methods achieved a maximum of 0.87 Fe, whereas the proposed QNN model achieved 0.96 Fe.
Conclusion
COVID-19 detection is a challenging task because of limited data availability and complex image features. To handle this issue, a modified CGAN model is proposed for synthetic CT image generation, which yields more accurate image classification through better model training/testing. Classification results depend on the extracted features. In this research, two models, CML and QML, are proposed. The CML model achieved a maximum accuracy of 0.85 on the UCSD-AI4H/COVID-CT dataset and 0.95 on the POF Hospital dataset, while the QML model achieved a maximum accuracy of 0.96 on the UCSD-AI4H/COVID-CT dataset and 1.00 on the POF Hospital dataset. The overall experimental results show that the QML model performed better than the CML model, indicating the superior performance of quantum machine learning. In the future, this study can be extended to classify the severity of COVID-19 (mild, moderate, and severe) to increase patient survival rates.
References
Machhi J, Herskovitz J, Senan AM, Dutta D, Nath B, Oleynikov MD, et al. The natural history, pathobiology, and clinical manifestations of SARS-CoV-2 infections. J Neuroimmune Pharmacol. 2020;21:1–28.
MacLean OA, Lytras S, Weaver S, Singer JB, Boni MF, Lemey P, et al. Natural selection in the evolution of SARS-CoV-2 in bats, not humans, created a highly capable human pathogen. BioRxiv. 2020.
Marvel SW, House JS, Wheeler M, Song K, Zhou YH, Wright FA, et al. The COVID-19 Pandemic Vulnerability Index (PVI) Dashboard: monitoring county-level vulnerability using visualization, statistical modeling, and machine learning. Environ Health Perspect. Data accessed at https://covid19.who.int/.
Jin YH, Cai L, Cheng ZS, Cheng H, Deng T, Fan YP, et al. A rapid advice guideline for the diagnosis and treatment of 2019 novel coronavirus (2019-nCoV) infected pneumonia (standard version). Mil Med Res. 2020;7:4.
Rothan HA, Byrareddy SN. The epidemiology and pathogenesis of coronavirus disease (COVID-19) outbreak. J Autoimmun. 2020;p. 102433.
Ai T, Yang Z, Hou H, Zhan C, Chen C, Lv W, et al. Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases, Radiology. 2020;p. 200642.
Wujtewicz M, Dylczyk-Sommer A, Aszkiełowicz A, Zdanowski S, Piwowarczyk S, Owczuk R. COVID-19–what should anaethesiologists and intensivists know about it? Anaesthesiology intensive therapy. 2020;52:34–41.
Romagnoli S, Peris A, De Gaudio AR, Geppetti P. SARS-CoV-2 and COVID-19: from the bench to the bedside. Physiol Rev. 2020;100:1455.
Shi F, Wang J, Shi J, Wu Z, Wang Q, Tang Z, et al. Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for covid-19, IEEE Rev Biomed Eng. 2020.
Yasmin M, Sharif M, Irum I, Mehmood W, Fernandes SL. Combining multiple color and shape features for image retrieval. IIOAB J. 2016;7:97–110.
Nida N, Sharif M, Khan MUG, Yasmin M, Fernandes SL. A framework for automatic colorization of medical imaging. IIOAB J. 2016;7:202–9.
Amin J, Sharif M, Yasmin M, Ali H, Fernandes SL. A method for the detection and classification of diabetic retinopathy using structural predictors of bright lesions. Journal of Computational Science. 2017;19:153–64.
Shah JH, Chen Z, Sharif M, Yasmin M, Fernandes SL. A novel biomechanics-based approach for person re-identification by generating dense color sift salience features. Journal of Mechanics in Medicine and Biology. 2017;17:1740011.
Fatima Bokhari ST, Sharif M, Yasmin M, Fernandes SL. Fundus image segmentation and feature extraction for the detection of glaucoma: a new approach. Current Medical Imaging. 2018;vol. 14, pp. 77–87.
Naqi S, Sharif M, Yasmin M, Fernandes SL. Lung nodule detection using polygon approximation and hybrid features from CT images. Current Medical Imaging. 2018;14:108–17.
Amin J, Sharif M, Yasmin M, Fernandes SL. Big data analysis for brain tumor detection: Deep convolutional neural networks. Futur Gener Comput Syst. 2018;87:290–7.
Amin J, Sharif M, Yasmin M, Fernandes SL. A distinctive approach in brain tumor detection and classification using MRI. Pattern Recogn Lett. 2017.
Amin J, Sharif M, Yasmin M, Saba T, Anjum MA, Fernandes SL. A new approach for brain tumor segmentation and classification based on score level fusion using transfer learning. J Med Syst. 2019;43:1–16.
Muhammad N, Sharif M, Amin J, Mehboob R, Gilani SA, Bibi N, et al. Neurochemical alterations in sudden unexplained perinatal deaths—a review. Front Pediatr. 2018;6:6.
Amin J, Sharif M, Yasmin M, Saba T, Raza M. Use of machine intelligence to conduct analysis of human brain data for detection of abnormalities in its cognitive functions. Multimed Tools Appl. 2020;79:10955–73.
Sharif M, Amin J, Siddiqa A, Khan HU, Malik MSA, Anjum MA, et al. Recognition of different types of leukocytes using YOLOv2 and optimized bag-of-features. IEEE Access. 2020;8:167448–59.
Amin J, Sharif M, Anjum MA, Khan HU, Malik MSA, Kadry S. An integrated design for classification and localization of diabetic foot ulcer based on CNN and YOLOv2-DFU models. IEEE Access. 2020.
Ieracitano C, Mammone N, Hussain A, Morabito FC. A novel explainable machine learning approach for EEG-based brain-computer interface systems. Neural Comput Applic. 2021;pp. 1–14.
Wang S, Kang B, Ma J, Zeng X, Xiao M, Guo J, et al. A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19). MedRxiv. 2020.
Yadav SS, Jadhav SM. Deep convolutional neural network based medical image classification for disease diagnosis. J Big Data. 2019;6:113.
Nguyen G, Dlugolinsky S, Bobák M, Tran V, García ÁL, Heredia I, et al. Machine Learning and Deep Learning frameworks and libraries for large-scale data mining: a survey. Artif Intell Rev. 2019;52:77–124.
Bandyopadhyay S, Dutta S. Detection of fraud transactions using recurrent neural network during COVID-19. 2020.
Jiang Y, Chen H, Loew M, Ko H. COVID-19 CT image synthesis with a conditional generative adversarial network. arXiv preprint arXiv:2007.14638. 2020.
Abbas A, Abdelsamea MM, Gaber M. 4S-DT: self supervised super sample decomposition for transfer learning with application to COVID-19 detection, arXivpreprint arXiv:2007.11450. 2020.
Cheng Y, Zhao X, Huang K, Tan T. Semi-supervised learning and feature evaluation for RGB-D object recognition. Comput Vis Image Underst. 2015;139:149–60.
Xin B, Peng W. Prediction for chaotic time series-based AE-CNN and transfer learning. Complexity. vol. 2020, 2020.
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, et al. Deep learning in medical imaging and radiation therapy. Med Phys. 2019;46:e1–36.
Rajinikanth V, Dey N, Raj ANJ, Hassanien AE, Santosh K, Raja N. Harmony-search and otsu based system for coronavirus disease (COVID-19) detection using lung CT scan images. arXiv preprint arXiv:2004.03431. 2020.
Mishra BK, Keshri AK, Rao YS, Mishra BK, Mahato B, Ayesha S, et al. COVID-19 created chaos across the globe: three novel quarantine epidemic models. Chaos, Solitons & Fractals. 2020;p. 109928.
Yang GZ, Nelson BJ, Murphy RR, Choset H, Christensen H, Collins SH, et al. Combating COVID-19—the role of robotics in managing public health and infectious diseases. ed: Sci Robot. 2020.
Nguyen TT. Artificial intelligence in the battle against coronavirus (COVID-19): a survey and future research directions. Preprint, DOI, 2020;vol. 10.
Yu K-H, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nature biomedical engineering. 2018;2:719–31.
Vishnuvarthanan A, Rajasekaran MP, Govindaraj V, Zhang Y, Thiyagarajan A. An automated hybrid approach using clustering and nature inspired optimization technique for improved tumor and tissue segmentation in magnetic resonance brain images. Appl Soft Comput. 2017;57:399–426.
Toğaçar M, Ergen B, Cömert Z. COVID-19 detection using deep learning models to exploit Social Mimic Optimization and structured chest X-ray images using fuzzy color and stacking approaches. Comput Biol Med. 2020;p. 103805.
Nour M, Cömert Z, Polat K. A novel medical diagnosis model for COVID-19 infection detection based on deep features and Bayesian optimization. Appl Soft Comput. 2020;p. 106580.
Ozturk T, Talo M, Yildirim EA, Baloglu UB, Yildirim O, Acharya UR. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput Biol Med. 2020;p. 103792.
Dunjko V, Briegel HJ. Machine learning & artificial intelligence in the quantum domain: a review of recent progress. Rep Prog Phys. 2018;vol. 81, p. 074001.
Ciliberto C, Herbster M, Ialongo AD, Pontil M, Rocchetto A, Severini S, et al. Quantum machine learning: a classical perspective. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2018;474:20170551.
Schuld M. Quantum machine learning for supervised pattern recognition. 2017.
Anthimopoulos M, Christodoulidis S, Ebner L, Christe A, Mougiakakou S. Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans Med Imaging. 2016;35:1207–16.
Kassal I, Whitfield JD, Perdomo-Ortiz A, Yung M-H, Aspuru-Guzik A. Simulating chemistry using quantum computers. Annu Rev Phys Chem. 2011;62:185–207.
Dunjko V, Taylor JM, Briegel HJ. Quantum-enhanced machine learning. Phys Rev Lett. 2016;vol. 117, p. 130501.
Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. 2015.
Henderson M, Shakya S, Pradhan S, Cook T. Quanvolutional neural networks: powering image recognition with quantum circuits. Quantum Machine Intelligence. 2020;2:1–9.
Bergholm V, Izaac J, Schuld M, Gogolin C, Alam MS, Ahmed S, et al. Pennylane: automatic differentiation of hybrid quantum-classical computations. arXiv preprint arXiv:1811.04968. 2018.
Bergholm V, Izaac J, Schuld M, Gogolin C, Alam MS, Ahmed S, et al. PennyLane: automatic differentiation of hybrid quantum-classical computations. 2018, https://arxiv.org/abs/1811.04968 v3. Accessed on Thu. 2020.
Zhao J, Zhang Y, He X, Xie P. Covid-ct-dataset: a ct scan dataset about covid-19. 2020.
Yang X, He X, Zhao J, Zhang Y, Zhang S, Xie P. COVID-CT-dataset: a CT scan dataset about COVID-19. ArXiv e-prints, p. arXiv: 2003.13865. 2020.
Horry MJ, Chakraborty S, Paul M, Ulhaq A, Pradhan B, Saha M, et al. COVID-19 detection through transfer learning using multimodal imaging data. IEEE Access. 2020;8:149808–24.
Burgos-Artizzu XP. Computer-aided covid-19 patient screening using chest images (X-Ray and CT scans). medRxiv. 2020.
Wang Z, Liu Q, Dou Q. Contrastive cross-site learning with redesigned net for COVID-19 CT classification. IEEE J Biomed Health Inform. 2020;24:2806–13.
Ewen N, Khan N. Targeted self supervision for classification on a small COVID-19 CT scan dataset. arXiv preprint arXiv:2011.10188. 2020.
Ethics declarations
Ethical Approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Conflict of Interest
The authors declare no competing interests.
Cite this article
Amin, J., Sharif, M., Gul, N. et al. Quantum Machine Learning Architecture for COVID-19 Classification Based on Synthetic Data Generation Using Conditional Adversarial Neural Network. Cogn Comput 14, 1677–1688 (2022). https://doi.org/10.1007/s12559-021-09926-6