J Supercomput. 2022 Jun 8;78(17):18598–18615. doi: 10.1007/s11227-022-04595-0

Angle prediction model when the imaging plane is tilted about z-axis

Zheng Fang 1, Bichao Ye 1, Bingan Yuan 1, Tingjun Wang 1, Shuo Zhong 1, Shunren Li 2, Jianyi Zheng 1
PMCID: PMC9175174  PMID: 35692867

Abstract

Computed Tomography (CT) is a complicated imaging system requiring highly accurate geometric positioning. We found a special artifact caused by the detection plane being tilted about the z-axis. In short-scan cone-beam reconstruction, this kind of geometric deviation results in half-circle-shaped blur around highlighted particles in the reconstructed slices. The artifact is distinct near the slice periphery but faint near the slice center. We built mathematical models and an InceptionV3-R deep network that learns the slice artifact features to estimate the detector z-axis tilt angle. On the test set, the mean absolute error is 0.08819 degrees, the root mean square error is 0.15221 degrees, and the R-square is 0.99944. A geometric deviation recovery formula was deduced, which eliminates this artifact efficiently. This research enlarges the body of knowledge on CT artifacts and verifies the capability of machine learning in correcting CT geometric deviation artifacts.

Keywords: CT, Artifact, Machine learning, Geometric deviation, Cone-beam, InceptionV3-R

Introduction

The term artifact is applied to any systematic discrepancy between the CT numbers in the reconstructed image and the true attenuation coefficients of the object. CT images are inherently more prone to artifacts than conventional radiographs. There are many kinds of CT imaging artifacts: streaking, shading, ring and band, geometric distortion, and so on [1].

The cone-beam CT (CBCT) circular-trajectory scanning system is mainly composed of an X-ray source, a rotary part, and a flat-panel detector, and the geometric deviations of the system mainly arise from these three parts. CBCT systems mainly adopt the FDK algorithm for three-dimensional reconstruction of objects [2]. An ideal reconstruction geometry requires that the center of the ray source, the center of the rotary table, and the detector center be collinear, that this center line be perpendicular to the detector plane, and that the rotation axis be parallel to the detector plane [3]. In actual mechanical installation, the geometry deviates from this ideal model, and the resulting geometric artifacts seriously affect the quality of CT images. Because the focal spot of the ray source and the rotation axis are not physically visible, it is difficult to realize the ideal geometric model, so geometric artifacts appear in the reconstructed image [4].

There are many practical geometric correction methods for CT reconstruction; they can be divided into analytic and iterative ones [5]. The analytic methods usually take some ideal conditions as the premise and calibrate only some parameters of the system, so as to reduce the complexity of the problem. Traditional analytic geometry correction algorithms need a calibration phantom manufactured with high accuracy and use its projections to calculate the geometric parameters of the CT system. The iterative methods use the quality of the reconstructed image as the criterion and apply an optimization algorithm to correct the geometric deviations. They have high accuracy and need no calibration phantom, but they suffer from problems such as convergence to local solutions, low efficiency, and sensitivity to the initial value.

With the advantages of strong practicability, simplicity, and high efficiency, analytic methods have become the mainstream in CT geometric calibration. Noo et al. proposed a method requiring only a small set of measurements of a simple calibration object consisting of two spherical objects; the calibration geometry can be determined analytically using explicit formulas, and the method is robust and easy to implement [6]. Building on Noo's work, Smekal et al. presented a high-precision method for geometric calibration in cone-beam computed tomography based on a Fourier analysis of the projection orbit data of an individual point-like object recorded with a flat-panel detector; for circular scan trajectories, the complete set of misalignment parameters that determine the deviation of the detector alignment from the ideal scan geometry is obtained [7]. Cho et al. developed a general analytic algorithm and a corresponding calibration phantom, consisting of 24 steel ball bearings in a known geometry, for estimating the geometric parameters of cone-beam computed tomography systems; the method estimates parameters including the X-ray source position, the rotation center of the detector, and the gantry angle, and can describe complex source-detector trajectories [8]. Li et al. proposed an annealing procedure that minimizes a cost function associated with the geometric parameters and the convergence of the ball-bearing back-projections from various viewing angles; six geometric parameters can be obtained directly [9].

Other researchers focus on the iterative methods. Kingston et al. used the sharpness of the reconstructed CT image as a cost function to find the geometric parameter values [10]. Meng et al. derived an objective function describing the dependence of the symmetry of the sum of projections on the geometric parameters, which converges to its minimum when the geometric parameters reach their true values [11]. This method requires no calibration phantom and can be used in circular-trajectory cone-beam CT with arbitrary cone angles.

Radiology is commonly used in medical imaging and non-destructive testing (NDT), and machine learning is increasingly applied in this field. Lakhani evaluated the efficacy of deep convolutional neural networks (DCNNs) in differentiating image details in radiography [12]. Oviedo et al. proposed a machine learning approach to predict crystallographic dimensionality from a limited number of thin-film XRD patterns [13]. Souza et al. demonstrated in their lung segmentation method that the problem of dense abnormalities in chest X-rays can be efficiently addressed by a reconstruction step based on a DCNN model [14]. Coronavirus disease (COVID-19) quickly became a global pandemic after it was first reported in December 2019, and some researchers have applied transfer learning with deep CNN-based COVID-19 screening on chest X-rays to identify efficient transfer learning strategies [15, 16].

This study focuses on circular-trajectory, cone-beam, short-scan CT imaging with detector yaw (z-axis) misalignment. When axial imaging is performed on a columnar sample, a special asymmetrical semicircle artifact appears around each columnar feature: the larger the yaw, the worse the artifact, and the artifact grows with the distance from the slice center. How to calculate the detector deflection angle of a CBCT system quickly and efficiently is the focus of this paper. In this study, geometric artifact features in reconstructed images are automatically extracted by the InceptionV3-R algorithm, which learns the mathematical relationship between the feature positions and the detector deflection angle. Finally, the deflection angle of the detector can be predicted from a reconstructed image.

A mathematical simulation program generated hundreds of cylindrical samples in which wires were stretched along the axial direction. The ASTRA Toolbox [17] was used for cone-beam projection, and the tilted-detector projection data were obtained through a geometric coordinate transformation. The FDK cone-beam filtered back-projection method was used to reconstruct the projections of the tilted detector. The slices were divided into two groups: a training database and a test database. The training database was used to train the artificial neural network and obtain the optimal parameters of the designed network, and the accuracy of the inclination estimation was verified on the test database.

The experiment design

Figure 1 shows our workflow. The three-dimensional simulation models are 500 different cylindrical digital phantoms. To create as many CT image features as possible, many metal wires were buried axially in the phantoms. The ideal cone-beam X-ray projection data were simulated using the ASTRA Toolbox. Detector-deflection projections at ±10°, ±8°, ±6°, ±4°, ±2°, and 0° were then obtained by coordinate transformation. The transformed projection data were reconstructed by the FDK algorithm, yielding a total of 5500 slice images under the various detector deflection states. The image resolution is 512 × 512. A code sketch of this projection and reconstruction pipeline is given below.
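As a concrete illustration, the sketch below sets up a short-scan cone-beam projection and FDK reconstruction with the ASTRA Toolbox Python interface. The geometry values (source and detector distances, detector size, scan range) are illustrative assumptions, not the exact parameters of our experiment:

```python
import astra
import numpy as np

# Illustrative geometry; the exact distances and detector size are assumptions.
vol_geom = astra.create_vol_geom(512, 512, 64)                 # rows, cols, slices
angles = np.linspace(0, np.pi + 0.35, 220, endpoint=False)     # short scan: 180° + fan angle
proj_geom = astra.create_proj_geom(
    'cone',
    1.0, 1.0,          # detector pixel spacing (u, v)
    64, 736,           # detector rows, detector columns
    angles,
    1000.0, 500.0)     # source-to-origin and origin-to-detector distances

phantom = np.zeros((64, 512, 512), dtype=np.float32)           # filled by the phantom generator

# Forward cone-beam projection (requires a CUDA-capable GPU)
proj_id, proj = astra.create_sino3d_gpu(phantom, proj_geom, vol_geom)

# FDK reconstruction of the (optionally tilt-transformed) projection data
rec_id = astra.data3d.create('-vol', vol_geom)
cfg = astra.astra_dict('FDK_CUDA')
cfg['ReconstructionDataId'] = rec_id
cfg['ProjectionDataId'] = proj_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id)
reconstruction = astra.data3d.get(rec_id)                      # shape (64, 512, 512)
```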

Fig. 1. Flowchart of the angle prediction experiment

Data preparation

Each 3D digital phantom is cylinder-shaped with wires embedded along the axial direction, and all of its slice images are identical. So the key work in generating a digital phantom is producing its cross-sectional image. Each slice contains 512 × 512 pixels and holds 100 wires with cross-sectional sizes from 2 × 2 to 8 × 8 pixels, scattered randomly within the phantom body. When the detector is tilted about the z-axis, an inhomogeneous artifact appears in the reconstructed slice, as shown in Fig. 2. It is easy to determine the direction of the tilt (positive or negative) by looking at the reconstructed slice, and the strength of the artifact increases with the tilt angle. To eliminate the artifact of incomplete back-projection, the reconstructed area is restricted to the circle inscribed in the square slice.
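A minimal NumPy sketch of such a phantom generator is given below; the attenuation values, the cylinder radius, and the policy of skipping wires that would fall outside the body are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_phantom_slice(size=512, n_wires=100):
    """One cross-sectional slice: a cylinder body with randomly placed square wires."""
    yy, xx = np.mgrid[:size, :size]
    c = (size - 1) / 2.0
    body = (xx - c) ** 2 + (yy - c) ** 2 <= (0.45 * size) ** 2   # cylinder cross-section
    img = 0.2 * body.astype(np.float32)                          # background attenuation
    for _ in range(n_wires):
        w = int(rng.integers(2, 9))              # wire cross-section: 2x2 .. 8x8 pixels
        x0 = int(rng.integers(0, size - w))
        y0 = int(rng.integers(0, size - w))
        if body[y0:y0 + w, x0:x0 + w].all():     # keep wires fully inside the body
            img[y0:y0 + w, x0:x0 + w] = 1.0      # high-attenuation (metal) value
    return img

slice_img = make_phantom_slice()
volume = np.repeat(slice_img[None], 64, axis=0)  # all slices identical along z
```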

Fig. 2. Artifact demonstration of detector z-axis tilt at different angles; a–e correspond to −8, −4, 0, 4, and 8 degrees, respectively

The reconstructed section image with detector deflection at 0° is shown in Fig. 4.

Fig. 4. Reconstructed image with correct geometry

After the ideal projection data are obtained, they can be transformed into the projection data of a deflected detector through a geometric relation determined by the specific deflection angle. The geometric relationship of the detector deflection is shown in Fig. 3.

Fig. 3. Geometric top view of the X-ray projection

Figure 3 shows a top view of the X-ray projection. S represents the position of the cone-beam X-ray source. The ideal, undeflected detector lies in the plane l, with O the center point of the detector plane. The detector rotates by α about the central column into the plane l′, so that l and l′ intersect at O. Suppose a ray k, drawn in red, intersects l at B and l′ at A. We can draw from A a perpendicular to l that meets it at C. We define the length of line segment OA as x′ and the length of line segment OB as x, and let d be the source-to-detector distance SO, which is perpendicular to l. The angle between d and k is γ. The length of line segment AC can be calculated by the following formula:

$$L_{AC} = x'\sin\alpha \tag{1}$$

The length of line segment BC is:

$$L_{BC} = x'\sin\alpha\tan\gamma \tag{2}$$

tan γ can be expressed as:

$$\tan\gamma = \frac{x}{d} \tag{3}$$

Since OC = OB + BC and OC = x′·cos α, the following equation can be obtained:

$$x'\cos\alpha = x + x'\sin\alpha\tan\gamma \tag{4}$$

By combining the above four formulas, the ideal projection coordinate x can be transformed into the projection coordinate x′ when the detector deflection angle is α:

$$x' = \frac{x}{\cos\alpha - \dfrac{x\sin\alpha}{d}} \tag{5}$$

Based on Fig. 3, we can deduce from Eq. (5) the formula that recovers the distorted projection:

$$x = \frac{x'\,d\cos\alpha}{d + x'\sin\alpha} \tag{6}$$

According to Eq. (5), the projection data of the deflected detector are obtained, and the slices with geometric artifacts are then reconstructed by the FDK algorithm. Eleven reconstructed images of one phantom at different detector deflection angles are shown in Figs. 4, 5, 6, 7, 8 and 9.
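In code, the transformation amounts to resampling each projection row onto the tilted detector's pixel grid: for every tilted-pixel coordinate x′, Eq. (6) gives the coordinate x at which the same ray crosses the ideal detector, and the ideal projection is interpolated there. The sketch below follows the paper's top-view derivation and therefore remaps only the transverse coordinate; the data layout (rows, angles, columns) and the unit pixel spacing are assumptions:

```python
import numpy as np

def tilt_projection(proj, alpha_deg, d, pixel_size=1.0):
    """Resample ideal projections onto a detector tilted by alpha about the z-axis.

    proj: ideal projection stack, shape (det_rows, n_angles, det_cols) (ASTRA layout).
    d:    source-to-detector distance, in the same units as pixel_size.
    """
    alpha = np.deg2rad(alpha_deg)
    rows, n_angles, cols = proj.shape
    # Signed coordinates x' of the tilted detector's pixel centres
    xp = (np.arange(cols) - (cols - 1) / 2.0) * pixel_size
    # Eq. (6): coordinate x on the ideal plane hit by the ray through tilted pixel x'
    x = xp * d * np.cos(alpha) / (d + xp * np.sin(alpha))
    x_idx = x / pixel_size + (cols - 1) / 2.0        # back to fractional pixel indices
    out = np.empty_like(proj)
    for a in range(n_angles):
        for r in range(rows):
            out[r, a] = np.interp(x_idx, np.arange(cols), proj[r, a])
    return out
```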

Fig. 5. Reconstructed images with detector deflections of 2° (a) and −2° (b)

Fig. 6. Reconstructed images with detector deflections of 4° (a) and −4° (b)

Fig. 7. Reconstructed images with detector deflections of 6° (a) and −6° (b)

Fig. 8. Reconstructed images with detector deflections of 8° (a) and −8° (b)

Fig. 9. Reconstructed images with detector deflections of 10° (a) and −10° (b)

After the 5500 reconstructed images were collected, 4000 of them were input into LR, SVR, CNN, and InceptionV3-R as the training set, 400 formed the validation set used to tune and optimize the model parameters, and the remaining 1100 were used as the test set to evaluate model performance. Normalizing the divided datasets limits all data to the range [0, 1] and improves the solving efficiency of the models.
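A sketch of the split and normalization might look as follows; mapping the angle labels from [−10°, 10°] to [0, 1] is our assumption, made so that the labels match the sigmoid output range used later:

```python
import numpy as np

def normalize(images, angles):
    """Scale images and angle labels to [0, 1]."""
    imgs = (images - images.min()) / (images.max() - images.min())
    labels = (angles + 10.0) / 20.0          # assumed label scaling: -10°..10° -> 0..1
    return imgs, labels

idx = np.random.default_rng(0).permutation(5500)
train_idx, val_idx, test_idx = idx[:4000], idx[4000:4400], idx[4400:]
```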

Data preprocessing

The main purpose of this study is to predict the deflection angle of the detector from the reconstructed image of the measured object when the detector is deflected about its central column. To this end, three operations are needed: image data preprocessing, angle prediction model building, and accuracy evaluation of the different models. The operation flowchart is shown in Fig. 10.

Fig. 10. Flowchart of reconstructed-image data processing

Angle prediction model

Linear regression

The basic principle is that the input vector $X = (x_1, x_2, x_3, \dots, x_n)$ is weighted to obtain the target value $\hat{y}$:

$$\hat{y}(\omega, x) = \omega_0 + \omega_1 x_1 + \cdots + \omega_n x_n \tag{7}$$

Here $\omega = (\omega_0, \omega_1, \dots, \omega_n)$ is the parameter vector of the linear regression model and represents the weight of each input item. The process of model training is essentially the process of solving for $\omega$, and the commonly used method is the least squares method (LSM).
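As an illustration, the LSM solution of Eq. (7) can be computed directly with a linear least-squares solver (scikit-learn's LinearRegression does the same internally); the input vector here would be the flattened slice or features extracted from it:

```python
import numpy as np

def fit_lsm(X, y):
    """Least squares fit of y ≈ w0 + w1*x1 + ... + wn*xn.

    X: (n_samples, n_features) input matrix; y: (n_samples,) target angles.
    """
    A = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend a column of 1s for w0
    w, *_ = np.linalg.lstsq(A, y, rcond=None)      # minimizes ||A w - y||^2
    return w

def predict_lsm(w, X):
    return w[0] + X @ w[1:]
```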

Support vector regression

Support Vector Regression (SVR) is a regression method based on the Support Vector Machine (SVM). By finding the hyperplane that minimizes the total deviation of all samples, the data are fitted near this plane. The purpose of support vector regression [17] is to find the optimal hyperplane $f(x) = \omega^T \phi(x) + b$, such that the deviation between the true value y and the predicted value f(x) satisfies the following equation:

$$\left|y - f(x)\right| = \left|y - \omega^T\phi(x) - b\right| < \varepsilon \tag{8}$$

In this formula, $\phi(x)$ represents the kernel mapping, and $\varepsilon$ represents the manually designed upper limit of the deviation.
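In practice this can be set up with scikit-learn's SVR; the RBF kernel and the hyperparameter values below are assumptions, as the paper does not report them, and the training arrays here are placeholders:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 64))          # placeholder: flattened slices or features
y_train = rng.uniform(-10, 10, size=100)      # placeholder: deflection angles in degrees

# epsilon corresponds to the deviation bound ε in Eq. (8); kernel and C are assumptions.
svr = SVR(kernel='rbf', C=1.0, epsilon=0.1)
svr.fit(X_train, y_train)
y_pred = svr.predict(X_train[:5])
```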

Convolutional neural network (CNN)

Since the data we deal with are single-channel grayscale images, we also build a plain Convolutional Neural Network (CNN) baseline. It is mainly composed of two convolutional layers and two pooling layers; stacking convolution and pooling layers realizes feature extraction from the input data. Finally, a fully connected layer and a sigmoid output are attached to perform the regression task. The model structure is shown in Fig. 11.

Fig. 11. Diagram of the CNN model
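A Keras sketch of such a baseline is shown below; the filter counts and the width of the fully connected layer are illustrative assumptions, as the paper only specifies the overall layout (two convolution/pooling stages, a fully connected layer, and a sigmoid output):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Two conv+pool stages, one fully connected layer, sigmoid regression head.
# Filter counts and dense width are assumptions.
cnn = tf.keras.Sequential([
    layers.Conv2D(16, 3, activation='relu', input_shape=(512, 512, 1)),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid'),   # normalized angle in [0, 1]
])
cnn.compile(optimizer='adam', loss='mse', metrics=['mae'])
```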

InceptionV3-R

Inception-V3 is a CNN for image analysis and target detection. Large-scale convolutions are decomposed into many small-scale convolutions to reduce computation, and asymmetric convolutions in different directions are used to generate features with low correlation (near-orthogonal) and to accelerate the convergence of training. Its high efficiency makes it widely used in image processing, mainly for classification [18–20]. The estimation of the deflection angle, however, is a regression problem. By modifying the activation function of the final output layer, the deep learning model used for image classification is transformed into a regression model. We improve the traditional model by adding three fully connected layers at the output; the output is reduced to one dimension by a sigmoid activation function.

Figure 12 shows the flowchart of estimating the inclination angle of the imaging plane from reconstructed images. A ReLU activation function follows each convolutional layer to increase the nonlinearity of the model. The main idea of the Inception module is to approximate the optimal local sparse structure using dense components [21]. Compared with previous models, it increases the depth and width of the model while reducing the number of parameters and the computation. We name the modified network InceptionV3-R, as shown in Fig. 13.

Fig. 12. Regression flowchart of the detector deflection angle

Fig. 13. Diagram of InceptionV3-R: Inception Model I (IM I), Inception Model II (IM II), Inception Model III (IM III)

The structure of InceptionV3-R is shown in Table 1; its input is a 512 × 512 pixel reconstructed image.

Table 1.

InceptionV3-R structure

Process                   Kernel/Stride            Note
Conv                      3×3 / 2                  Input reconstructed image
Conv                      3×3 / 1                  Feature extraction from reconstructed images
Conv                      3×3 / 1
Pool                      3×3 / 2
Conv                      1×1 / 1
Conv                      3×3 / 1
Pool                      3×3 / 2
3 × Inception Model I     –                        As shown in Fig. 13
5 × Inception Model II    –                        As shown in Fig. 13
2 × Inception Model III   –                        As shown in Fig. 13
Pool                      8×8 / 1
Pool                      Global Average Pooling   Global average pooling of image features
Sigmoid                   Regression               Output the predicted angle
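A possible Keras realization of this regression variant is sketched below. Reusing the stock Inception-V3 backbone by replicating the grayscale slice to three channels, and the widths of the two hidden fully connected layers, are our assumptions; the sigmoid head outputting one normalized angle follows the structure in Table 1:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Inception-V3 backbone trained from scratch (weights=None); the grayscale
# slice is replicated to three channels so the stock backbone can be reused.
backbone = tf.keras.applications.InceptionV3(
    include_top=False, weights=None, input_shape=(512, 512, 3))

inputs = tf.keras.Input(shape=(512, 512, 1))
x = layers.Concatenate()([inputs, inputs, inputs])   # 1 -> 3 channels (assumption)
x = backbone(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation='relu')(x)          # three added FC layers;
x = layers.Dense(64, activation='relu')(x)           # hidden widths are assumptions
outputs = layers.Dense(1, activation='sigmoid')(x)   # normalized angle in [0, 1]

model = tf.keras.Model(inputs, outputs, name='InceptionV3-R')
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
```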

The output of the InceptionV3-R algorithm adopts the sigmoid function, which maps any number to a value between 0 and 1, limiting the range of the output. Its expression is as follows:

$$\mathrm{Sigmoid}(x) = \frac{1}{1 + e^{-x}} \tag{9}$$
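Because the sigmoid confines the network output to [0, 1], the predicted value must be mapped back to degrees; assuming the labels were scaled from the trained range of [−10°, 10°] as above, the inverse mapping is simply:

```python
def to_degrees(y_norm, lo=-10.0, hi=10.0):
    """Map a sigmoid output in [0, 1] back to a deflection angle in degrees."""
    return lo + (hi - lo) * y_norm
```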

Experimental evaluation

Evaluation indicators

After training and tuning the LR, SVR, CNN, and InceptionV3-R regression algorithms, the models were evaluated and compared on the test set. The evaluation indicators are MSE, MAE, Max-error, and R²-score. With y the label values of the test set and ŷ the predicted outputs of the model, the expressions of MSE, MAE, Max-error, and R²-score are as follows:

$$\mathrm{MSE}(y,\hat{y}) = \frac{1}{m}\sum_{i=1}^{m}\left(y_i-\hat{y}_i\right)^2 \tag{10}$$

$$\mathrm{MAE}(y,\hat{y}) = \frac{1}{m}\sum_{i=1}^{m}\left|y_i-\hat{y}_i\right| \tag{11}$$

$$\text{Max-error}(y,\hat{y}) = \max_i\left|y_i-\hat{y}_i\right| \tag{12}$$

$$R^2(y,\hat{y}) = 1-\frac{\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^2}, \qquad \bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_i \tag{13}$$

MSE and Max-error show the prediction error of the model from two different perspectives. MAE represents the average difference between the predicted and true values. The determination coefficient R²-score is also known as the goodness of fit: the closer its value is to 1, the higher the fitting accuracy.
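These four indicators are available directly in scikit-learn, so the evaluation of a trained model reduces to a few library calls:

```python
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             max_error, r2_score)

def evaluate(y_true, y_pred):
    """Compute the four indicators of Eqs. (10)-(13)."""
    return {
        'MSE': mean_squared_error(y_true, y_pred),
        'MAE': mean_absolute_error(y_true, y_pred),
        'Max-error': max_error(y_true, y_pred),
        'R2-score': r2_score(y_true, y_pred),
    }
```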

Data visualization analysis

The experimental environment is shown in Table 2.

Table 2.

Experimental environment

Title       Parameter
CPU         Intel Core i7-10700F
GPU         GeForce RTX 3060
RAM         32 GB
Software    TensorFlow 2.5, Python 3.9

In the above experimental environment, the LR, SVR, CNN, and InceptionV3-R regression algorithms were trained, and the performance of each model was evaluated using MSE, MAE, Max-error, and R²-score. As shown in Table 3, InceptionV3-R performs best among the four algorithms according to the comprehensive evaluation index R²-score, and its maximum error is only 0.9324°. The loss curves during InceptionV3-R and CNN training are shown in Fig. 14: as the epoch count increases, both the training loss and the validation loss decrease to about 3.6 × 10⁻⁵. Table 4 shows the run times of the four models.

Table 3.

Evaluation indexes of the four algorithms

Model            MSE        MAE      Max-error   R²-score
LR               13.6207    3.1123   9.3878      0.6595
SVR              26.3324    4.442    10.6636     0.3417
CNN              2.6875     1.3528   5.6608      0.9328
InceptionV3-R    0.04326    0.14     0.9324      0.9989

Fig. 14. Loss curves of InceptionV3-R and CNN training

Table 4.

Model run times of the four algorithms

Model            Time (s)
LR               193.1677
SVR              215.2551
CNN              10,572.9427
InceptionV3-R    14,520.8964

To display the performance of each model more intuitively, the differences between the real and predicted values were calculated for the 1100 test samples, as shown in Fig. 15. The red dots represent the errors of InceptionV3-R; the blue dots of the LR model almost completely cover the red ones. The black dots represent the errors of the CNN model, which lie between the blue and the red. The green dots of the SVR model show the largest error spread, indicating that the SVR model performs worst, consistent with all evaluation indicators in Table 3.

Fig. 15. Scatter plot of the difference between the predicted and real values for the four algorithms

The trained model took only 30 s to predict the deflection angles of the 1100 test images, and for 1077 of them the absolute difference between the predicted and real values was no more than 0.5°. Figures 16 and 17 show the number of images in different error ranges predicted by the two neural network models. From Fig. 16, about 53% of the images have an absolute difference between predicted and real values within 0.1°, and the images with an absolute difference of no more than 0.5° account for 98% of the whole test set. Figure 17 shows that the absolute differences predicted by the CNN model are mainly distributed between 1° and 3°. For reconstructed images not seen in training, the InceptionV3-R model can effectively predict the detector deflection angle for the 500 wire-embedded phantoms, indicating that the model has learned the relationship between the geometric artifacts in the reconstructed images and the deflection angle of the detector.

Fig. 16. Histogram of deflection prediction errors of InceptionV3-R

Fig. 17. Histogram of deflection prediction errors of the CNN

Figure 18 shows the average prediction errors corresponding to the different deflection angles over the 1100 test samples. The smallest prediction errors occur at deflection angles of 0° and ±10°. At 0° the detector has no deflection, so there is no geometric artifact in the reconstructed image and, compared with the other deflection angles, the wire features are clearest. At ±10° the geometric artifacts are most obvious, but the feature area in the reconstructed image is also the largest.

Fig. 18. Histogram of average prediction errors corresponding to different detector deflection angles

Result

The maximum error is 0.9324°, and 98% of the test-set errors are no more than 0.5°. Therefore, we performed corrective reconstruction of the phantom based on this result; the reconstruction results are shown in Fig. 19. Even at the maximum error angle, the geometric artifacts are nearly invisible in the corrected reconstruction.

Fig. 19. Recovery of sections with the imaging plane tilted about the z-axis: a reconstructed section with an inclination of 8 degrees; b reconstruction with full angle restoration; c reconstruction with angle restoration in error by 0.5 degrees; d reconstruction with angle restoration in error by 0.9324 degrees

Discussion

The imaging plane may tilt in three directions: pitch, roll, and yaw (about the z-axis). When the tilt is about the x- or y-axis, there are no obvious artifacts: these two kinds of geometric misalignment only lead to image distortion rather than blur artifacts. It is therefore hard to judge whether the imaging plane is tilted about the x- or y-axis only by observing the reconstructed slices.

In prediction accuracy, the InceptionV3-R method used in this study is superior to traditional linear regression and support vector regression, but its computational cost in network training is also higher: it takes 14,520.8964 s to train the model and 75.2644 s to run the prediction on the test set. When the features captured by the network are greatly reduced, training of the network suffers, which is a disadvantage of the neural network approach.

An iterative method could also solve this problem, but it needs to perform reconstruction repeatedly to approach the optimal recovery, requiring a much larger number of arithmetic operations. When the z-axis tilt angle is less than 1 degree, the artifact is too slight to be found by observation; when the tilt angle is less than 0.5 degrees, even a computer can detect little difference between the reconstructions from an upright detector and a tilted one.

Conclusion

In this paper, we studied a new CT artifact and its correction using machine learning tools. At present, the mainstream correction methods fall into two categories: calibration methods and iterative approximation methods. Our method instead uses machine learning to estimate the geometric deviation and then recover from it, which is an innovative strategy for solving geometric deviation artifacts. In our simulation experiment, the InceptionV3-R model was used for the regression problem; the MAE was 0.08819 degrees, the RMSE 0.15221 degrees, and the R²-score 0.99944. It is feasible to evaluate and correct CT images by machine learning.

Acknowledgements

We would like to acknowledge the support of the National Natural Science Foundation of China (61571381) and the Double-Hundred Project for the Introduction of Xiamen Elite (Special Talent 2019.1). We also thank Ge Wang of the Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, for his suggestions.

Data Availability

Research data are not shared.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Barrett JF, Keat N. Artifacts in CT: recognition and avoidance. Radiographics. 2004;24(6):1679–1691. doi: 10.1148/rg.246045065
2. Sun Y, Hou Y, Hu J. Reduction of artifacts induced by misaligned geometry in cone-beam CT. IEEE Trans Biomed Eng. 2007;54(8):1461–1471. doi: 10.1109/TBME.2007.891166
3. Guo J, Vidal V, Baskurt A, et al. Evaluating the local visibility of geometric artifacts. In: Proceedings of the ACM SIGGRAPH Symposium on Applied Perception. 2015. pp 91–98
4. Baek J, Pelc NJ. Local and global 3D noise power spectrum in cone-beam CT system with FDK reconstruction. Med Phys. 2011;38(4):2122–2131. doi: 10.1118/1.3556590
5. Geyer LL, Schoepf UJ, Meinel FG, et al. State of the art: iterative CT reconstruction techniques. Radiology. 2015;276(2):339–357. doi: 10.1148/radiol.2015132766
6. Noo F, Clackdoyle R, Mennessier C, et al. Analytic method based on identification of ellipse parameters for scanner calibration in cone-beam tomography. Phys Med Biol. 2000;45(11):3489. doi: 10.1088/0031-9155/45/11/327
7. Chang CH, Ni YC, Huang SY, et al. A geometric calibration method for the digital chest tomosynthesis with dual-axis scanning geometry. PLoS One. 2019;14(4):e0216054. doi: 10.1371/journal.pone.0216054
8. Cho Y, Moseley DJ, Siewerdsen JH, et al. Accurate technique for complete geometric calibration of cone-beam computed tomography systems. Med Phys. 2005;32(4):968–983. doi: 10.1118/1.1869652
9. Li G, Luo S, You C, et al. A novel calibration method incorporating nonlinear optimization and ball-bearing markers for cone-beam CT with a parameterized trajectory. Med Phys. 2019;46(1):152–164. doi: 10.1002/mp.13278
10. Kingston A, Sakellariou A, Varslot T, et al. Reliable automatic alignment of tomographic projection data by passive auto-focus. Med Phys. 2011;38(9):4934–4945. doi: 10.1118/1.3609096
11. Meng Y, Gong H, Yang X. Online geometric calibration of cone-beam computed tomography for arbitrary imaging objects. IEEE Trans Med Imaging. 2012;32(2):278–288. doi: 10.1109/TMI.2012.2224360
12. Lakhani P. Deep convolutional neural networks for endotracheal tube position and X-ray image classification: challenges and opportunities. J Digit Imaging. 2017;30(4):460–468. doi: 10.1007/s10278-017-9980-7
13. Oviedo F, Ren Z, Sun S, et al. Fast and interpretable classification of small X-ray diffraction datasets using data augmentation and deep neural networks. npj Comput Mater. 2019;5(1):1–9. doi: 10.1038/s41524-019-0196-x
14. Souza JC, Diniz JOB, Ferreira JL, et al. An automatic method for lung segmentation and reconstruction in chest X-ray using deep neural networks. Comput Methods Programs Biomed. 2019;177:285–296. doi: 10.1016/j.cmpb.2019.06.005
15. Lee KS, Kim JY, Jeon E, et al. Evaluation of scalability and degree of fine-tuning of deep convolutional neural networks for COVID-19 screening on chest x-ray images using explainable deep-learning algorithm. J Personal Med. 2020;10(4):213. doi: 10.3390/jpm10040213
16. Apostolopoulos ID, Mpesiana TA. Covid-19: automatic detection from x-ray images utilizing transfer learning with convolutional neural networks. Phys Eng Sci Med. 2020. doi: 10.1007/s13246-020-00865-4
17. Nguyen L. Tutorial on support vector machine. Appl Comput Math. 2017;6(4–1):1–15
18. Shi L, Liu B, Yu H, et al. Review of CT image reconstruction open source toolkits. J X-Ray Sci Technol. 2020;28:619–639. doi: 10.3233/XST-200666
19. Lin C, Li L, Luo W, et al. Transfer learning based traffic sign recognition using inception-v3 model. Period Polytech Transp Eng. 2019;47(3):242–250. doi: 10.3311/PPtr.11480
20. Alom MZ, Hasan M, Yakopcic C, et al. Improved inception-residual convolutional neural network for object recognition. Neural Comput Appl. 2020;32(1):279–293. doi: 10.1007/s00521-018-3627-6
21. Wang C, Chen D, Hao L, et al. Pulmonary image classification based on inception-v3 transfer learning model. IEEE Access. 2019;7:146533–146541. doi: 10.1109/ACCESS.2019.2946000


