Double-Shot Transfer Learning for Breast Cancer Classification from X-Ray Images
Figure 1. Examples of noisy and corrupted images. (a,b) contain black arrow marks made by doctors or radiologists to indicate the location of the lesion; (c) results from insufficient illumination or incorrect device adjustment.
Figure 2. Dataset examples. Benign cases are shown in (a–c) and malignant cases in (d–f). (a,d) are samples from the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), (b,e) from the Mammographic Image Analysis Society (MIAS) dataset, and (c,f) from the Breast Cancer Digital Repository (BCDR).
Figure 3. Sketch of instance transfer in single-shot transfer learning (SSTL), where some instances from the similar target domain D_t (CBIS-DDSM) are included in the source domain D_s (ImageNet) to update the weights of the pre-trained models.
Figure 4. Sketch of instance transfer in double-shot transfer learning (DSTL), where some instances from the target domain D_t (MIAS or BCDR) are included in the source domain D_s (ImageNet and CBIS-DDSM). Note that SSTL has already made the CBIS-DDSM instances part of D_s.
Figure 5. The DSTL process. Various pre-trained models are first fine-tuned on the large augmented CBIS-DDSM dataset to update their weights and biases; the updated models are then fine-tuned on a new, similar dataset to classify lesions as benign or malignant.
Figure 6. The three fine-tuned layers of every pre-trained model.
Figure 7. Receiver operating characteristic (ROC) curves of the pre-trained models with SSTL for breast cancer classification on the MIAS dataset.
Figure 8. ROC curves of the pre-trained models with DSTL on the MIAS dataset, where the AUC of every pre-trained model improves.
Figure 9. ROC curves of the pre-trained models with SSTL for breast cancer classification on the BCDR dataset.
Figure 10. ROC curves of the pre-trained models with DSTL on the BCDR dataset.
Abstract
1. Introduction
1. An effective technique based on transfer learning, called double-shot transfer learning (DSTL), is introduced to improve the overall accuracy and performance of pre-trained networks for breast cancer classification, making them more suitable for medical image classification. More importantly, DSTL speeds up convergence significantly.
2. DSTL updates the learnable parameters (weights and biases) of any pre-trained network by fine-tuning it on a large dataset that is similar, but not identical, to the target dataset. Concretely, DSTL adds new instances (CBIS-DDSM) to the source domain (D_s) that are similar to the target domain (D_t), so that the updated weights of the pre-trained models reflect a distribution close to the target domain (the MIAS and BCDR datasets). A minimal sketch of the two fine-tuning shots follows this list.
3. The number of X-ray images is enlarged by a combination of augmentation methods chosen to mirror the most common image display operations that doctors and radiologists perform during diagnostic viewing: variations of rotation, brightness, flipping, and contrast. These methods reduce overfitting and yield more robust results.
4. The proposed DSTL provides a valuable solution to the source–target domain mismatch problem in transfer learning.
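As referenced above, the following is a minimal PyTorch sketch of the two fine-tuning shots, not the authors' implementation (the training-option terminology later on this page suggests MATLAB's Deep Learning Toolbox was used). The directory paths `cbis_ddsm` and `mias`, the ImageFolder layout, and the `fine_tune` helper are illustrative assumptions; the hyperparameters (SGDM, mini-batch size 40, momentum 0.9, initial learning rate 0.001, up to 100 epochs) follow the training-options table below.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def fine_tune(model, loader, epochs=100, lr=0.001, momentum=0.9, device=None):
    """One 'shot' of transfer learning: fine-tune all learnable parameters on `loader`."""
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device).train()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)  # SGDM
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()
    return model

# Hypothetical directory layout: <root>/benign/*.png, <root>/malignant/*.png.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
cbis = DataLoader(datasets.ImageFolder("cbis_ddsm", tfm), batch_size=40, shuffle=True)
mias = DataLoader(datasets.ImageFolder("mias", tfm), batch_size=40, shuffle=True)

# Shot 0 (implicit): ImageNet pre-training supplies the initial weights.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # two-class head: benign vs. malignant

model = fine_tune(model, cbis)  # shot 1: large, similar source-side dataset (CBIS-DDSM)
model = fine_tune(model, mias)  # shot 2: small target dataset (MIAS or BCDR)
```

Swapping the `mias` loader for a BCDR loader yields the second target configuration reported in the results.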
2. Materials and Methods
2.1. Dataset Description
2.1.1. CBIS-DDSM Dataset
2.1.2. MIAS Dataset
2.1.3. BCDR Dataset
2.2. Pre-Trained Networks
2.2.1. AlexNet
2.2.2. GoogLeNet
2.2.3. VGG
2.2.4. MobileNet-v2
2.2.5. ResNet
2.2.6. ShuffleNet
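Figure 6 indicates that three final layers of each pre-trained model are replaced and fine-tuned for the two-class task (in MATLAB's Deep Learning Toolbox these would typically be the last fully connected, softmax, and classification layers). A hedged PyTorch equivalent swaps in a new two-class head; the attribute names `fc` and `classifier` follow torchvision's model definitions, not the paper.

```python
import torch.nn as nn
from torchvision import models

def replace_head(model: nn.Module, num_classes: int = 2) -> nn.Module:
    """Replace the final classification layer with a benign/malignant head."""
    if hasattr(model, "fc"):  # GoogLeNet, ResNet-50/101, ShuffleNet
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    else:                     # AlexNet, VGG-16/19, MobileNet-v2 use `classifier`
        model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)
    return model

model = replace_head(models.googlenet(weights="IMAGENET1K_V1"))
```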
Algorithm 1 Validation Accuracy Monitoring
Input: Info, EpochsNumber
Output: BestValAccuracy
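Only the interface of Algorithm 1 survives in this excerpt. Reading `Info` as a stream of per-epoch training records and `BestValAccuracy` as the highest validation accuracy observed, a minimal Python sketch of what such a monitor plausibly does (the field name `ValidationAccuracy` is an assumption):

```python
def monitor_validation_accuracy(info_stream, epochs_number):
    """Return the best validation accuracy seen over at most `epochs_number` epochs."""
    best_val_accuracy = 0.0
    for epoch, info in enumerate(info_stream, start=1):
        if epoch > epochs_number:
            break
        acc = info.get("ValidationAccuracy", 0.0)  # assumed field name
        if acc > best_val_accuracy:
            best_val_accuracy = acc                # new best checkpoint found
    return best_val_accuracy

# Toy per-epoch records:
history = [{"ValidationAccuracy": a} for a in (88.1, 91.4, 90.7, 93.9)]
print(monitor_validation_accuracy(history, epochs_number=100))  # -> 93.9
```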
2.3. Double-Shot Transfer Learning
3. Execution Environment
4. Results
4.1. Sensitivity
4.2. Specificity
4.3. Accuracy
4.4. Receiver Operating Characteristic (ROC)
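The body text of Sections 4.1–4.4 is not reproduced above; the metrics rest on the standard confusion-matrix definitions: sensitivity = TP/(TP + FN), specificity = TN/(TN + FP), accuracy = (TP + TN)/(TP + TN + FP + FN), and AUC, the area under the ROC curve. A scikit-learn sketch on toy scores (the arrays are illustrative, not the paper's data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Toy labels/scores: 1 = malignant (positive class), 0 = benign.
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.91, 0.20, 0.65, 0.80, 0.45, 0.10, 0.30, 0.55])
y_pred  = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # true-positive rate (recall)
specificity = tn / (tn + fp)                   # true-negative rate
accuracy    = (tp + tn) / (tp + tn + fp + fn)
auc         = roc_auc_score(y_true, y_score)   # area under the ROC curve
print(sensitivity, specificity, accuracy, auc)
```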
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
VGG | Visual Geometry Group
ResNet | Residual Network
SSTL | Single-Shot Transfer Learning
DSTL | Double-Shot Transfer Learning
CAD | Computer-Aided Diagnosis
CNN | Convolutional Neural Network
DDSM | Digital Database for Screening Mammography
CBIS-DDSM | Curated Breast Imaging Subset of DDSM
MIAS | Mammographic Image Analysis Society
BCDR | Breast Cancer Digital Repository
ILSVRC | ImageNet Large-Scale Visual Recognition Challenge
Dataset | Original Samples | Rotation | Flipping | Brightness | Contrast | Total |
---|---|---|---|---|---|---|
CBIS-DDSM | 7277 | 50,939 | 14,554 | 43,662 | 14,554 | 130,986 |
MIAS | 114 | 798 | 228 | 684 | 228 | 2052 |
BCDR | 159 | 1113 | 318 | 954 | 318 | 2862 |
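Reading the table row-wise, every original image contributes 7 rotated, 2 flipped, 6 brightness-adjusted, and 2 contrast-adjusted copies, so each total is 18× the original count (e.g., 114 × 18 = 2052 for MIAS). A torchvision sketch of such an offline augmentation pass; the specific angles and adjustment factors below are illustrative assumptions, since the excerpt does not list them:

```python
from PIL import Image
from torchvision.transforms import functional as F

def augment(image: Image.Image) -> list[Image.Image]:
    """Produce 17 augmented copies of one mammogram (7 + 2 + 6 + 2), matching the table's ratios."""
    out = []
    out += [F.rotate(image, a) for a in (45, 90, 135, 180, 225, 270, 315)]  # 7 rotations
    out += [F.hflip(image), F.vflip(image)]                                 # 2 flips
    out += [F.adjust_brightness(image, b)                                   # 6 brightness levels
            for b in (0.7, 0.8, 0.9, 1.1, 1.2, 1.3)]
    out += [F.adjust_contrast(image, c) for c in (0.8, 1.2)]                # 2 contrast levels
    return out

copies = augment(Image.open("scan.png").convert("L"))  # hypothetical file name
```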
Class | Benign | Malignant | Total |
---|---|---|---|
Training samples | 54,523 | 44,444 | 98,967 |
Validation samples | 13,630 | 11,112 | 24,742 |
Testing samples | 4009 | 3268 | 7277 |
Total | 72,162 | 58,824 | 130,986 |
Class | Benign | Malignant | Total |
---|---|---|---|
Training samples | 857 | 694 | 1551 |
Validation samples | 214 | 173 | 387 |
Testing samples | 63 | 51 | 114 |
Total | 1134 | 918 | 2052 |
Class | Benign | Malignant | Total |
---|---|---|---|
Training samples | 1088 | 1074 | 2162 |
Validation samples | 272 | 269 | 541 |
Testing samples | 80 | 79 | 159 |
Total | 1440 | 1422 | 2862 |
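The three class-distribution tables above consistently use roughly 75.5% of samples for training, 18.9% for validation, and 5.6% for testing; notably, each testing count equals the dataset's original (pre-augmentation) image count in the augmentation table. A hedged scikit-learn sketch of a stratified split with those inferred ratios (the splitting procedure itself is not stated in this excerpt):

```python
from sklearn.model_selection import train_test_split

def split(paths, labels, seed=0):
    """Stratified ~75.5/18.9/5.6 train/validation/test split (ratios inferred from the tables)."""
    X_tmp, X_test, y_tmp, y_test = train_test_split(
        paths, labels, test_size=0.056, stratify=labels, random_state=seed)
    X_train, X_val, y_train, y_val = train_test_split(
        X_tmp, y_tmp, test_size=0.20, stratify=y_tmp, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```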
Model | Depth | Size | Parameters (Millions) | Image Input Size |
---|---|---|---|---|
AlexNet | 8 | 227 MB | 61 | 227 × 227 |
GoogLeNet | 22 | 27 MB | 7 | 224 × 224 |
VGG-16 | 16 | 515 MB | 138 | 224 × 224 |
VGG-19 | 19 | 535 MB | 144 | 224 × 224 |
MobileNet-v2 | 53 | 13 MB | 3.5 | 224 × 224 |
ResNet-50 | 50 | 96 MB | 25.6 | 224 × 224 |
ResNet-101 | 101 | 167 MB | 44.6 | 224 × 224 |
ShuffleNet | 50 | 6.3 MB | 1.4 | 224 × 224 |
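For reference, a sketch that reproduces the parameter counts above from torchvision. The constructor names are torchvision's, with two caveats: torchvision's GoogLeNet includes two auxiliary classifiers unless `aux_logits=False`, and torchvision ships ShuffleNet-v2, whose ~2.3 M parameters differ from the v1 figure (1.4 M) in the table.

```python
from torchvision import models

constructors = {
    "AlexNet": lambda: models.alexnet(),
    "GoogLeNet": lambda: models.googlenet(aux_logits=False, init_weights=True),
    "VGG-16": lambda: models.vgg16(),
    "VGG-19": lambda: models.vgg19(),
    "MobileNet-v2": lambda: models.mobilenet_v2(),
    "ResNet-50": lambda: models.resnet50(),
    "ResNet-101": lambda: models.resnet101(),
    "ShuffleNet": lambda: models.shufflenet_v2_x1_0(),
}
for name, ctor in constructors.items():
    n_params = sum(p.numel() for p in ctor().parameters())
    print(f"{name}: {n_params / 1e6:.1f} M parameters")
```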
Training Options | Configuration |
---|---|
Optimizer | SGDM a |
Mini Batch Size | 40 |
Momentum Value | 0.9 |
Maximum Epochs | 100 |
Initial Learning Rate | 0.001 |
Execution Environment | GPU |
Learning Rate Schedule | Constant |
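a SGDM: stochastic gradient descent with momentum.

These option names mirror MATLAB's Deep Learning Toolbox `trainingOptions`; an equivalent PyTorch configuration, shown as a hedged sketch rather than the authors' code:

```python
import torch

BATCH_SIZE = 40    # Mini Batch Size
MAX_EPOCHS = 100   # Maximum Epochs
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"  # Execution Environment: GPU

def make_optimizer(model: torch.nn.Module) -> torch.optim.SGD:
    # SGDM with a constant learning rate, so no LR scheduler is attached.
    return torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
```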
Dataset | Model | Val. Acc. | Testing Acc. | Specificity | Sensitivity | AUC
---|---|---|---|---|---|---
CBIS-DDSM | AlexNet | 90.18% | 81.92% | 86.65% | 76.11% | 89.24%
CBIS-DDSM | ShuffleNet | 93.85% | 89.28% | 92.51% | 85.30% | 95.46%
CBIS-DDSM | MobileNet-v2 | 94.50% | 92.03% | 93.39% | 90.35% | 96.20%
CBIS-DDSM | GoogLeNet | 96.56% | 93.68% | 96.26% | 90.50% | 97.47%
CBIS-DDSM | ResNet-50 | 96.97% | 93.20% | 95.26% | 90.65% | 96.59%
CBIS-DDSM | ResNet-101 | 97.09% | 93.47% | 95.88% | 90.50% | 97.57%
CBIS-DDSM | VGG-16 | 96.20% | 92.58% | 93.76% | 91.18% | 97.11%
CBIS-DDSM | VGG-19 | 95.12% | 90.93% | 93.02% | 88.36% | 96.65%
Dataset & Technique | Model | Validation Acc. | Testing Acc. | Specificity | Sensitivity | AUC
---|---|---|---|---|---|---
MIAS (with SSTL) | AlexNet | 56.07% | 88.60% | 92.06% | 84.31% | 96.14%
MIAS (with SSTL) | ShuffleNet | 68.73% | 92.98% | 93.65% | 92.15% | 96.64%
MIAS (with SSTL) | MobileNet-v2 | 64.60% | 92.11% | 93.65% | 90.19% | 96.20%
MIAS (with SSTL) | GoogLeNet | 60.21% | 88.60% | 95.24% | 80.39% | 98.41%
MIAS (with SSTL) | ResNet-50 | 72.35% | 93.86% | 95.24% | 92.15% | 98.41%
MIAS (with SSTL) | ResNet-101 | 70.80% | 91.23% | 95.24% | 86.27% | 97.60%
MIAS (with SSTL) | VGG-16 | 66.15% | 92.11% | 96.83% | 86.27% | 98.23%
MIAS (with SSTL) | VGG-19 | 57.88% | 90.35% | 90.48% | 90.19% | 97.45%
MIAS (with DSTL) | AlexNet+ | 61.50% | 92.11% | 95.24% | 88.24% | 96.92%
MIAS (with DSTL) | ShuffleNet+ | 80.88% | 96.49% | 96.82% | 96.07% | 99.44%
MIAS (with DSTL) | MobileNet-v2+ | 83.46% | 98.25% | 98.41% | 98.03% | 99.53%
MIAS (with DSTL) | GoogLeNet+ | 86.30% | 96.49% | 98.41% | 94.11% | 99.69%
MIAS (with DSTL) | ResNet-50+ | 82.43% | 95.61% | 96.82% | 94.11% | 99.25%
MIAS (with DSTL) | ResNet-101+ | 87.08% | 97.37% | 98.41% | 96.07% | 99.60%
MIAS (with DSTL) | VGG-16+ | 77.00% | 93.86% | 96.82% | 90.19% | 99.04%
MIAS (with DSTL) | VGG-19+ | 76.23% | 93.86% | 95.24% | 92.16% | 99.28%
BCDR (with SSTL) | AlexNet | 65.20% | 73.38% | 77.27% | 69.46% | 80.64%
BCDR (with SSTL) | ShuffleNet | 70.86% | 81.75% | 75.75% | 87.78% | 88.57%
BCDR (with SSTL) | MobileNet-v2 | 69.60% | 82.13% | 80.30% | 83.96% | 88.71%
BCDR (with SSTL) | GoogLeNet | 74.00% | 83.65% | 83.33% | 83.96% | 91.59%
BCDR (with SSTL) | ResNet-50 | 73.38% | 77.95% | 70.45% | 85.49% | 85.94%
BCDR (with SSTL) | ResNet-101 | 77.78% | 81.37% | 77.27% | 85.49% | 89.90%
BCDR (with SSTL) | VGG-16 | 73.17% | 79.09% | 77.27% | 80.92% | 89.14%
BCDR (with SSTL) | VGG-19 | 81.97% | 84.41% | 82.57% | 86.25% | 91.99%
BCDR (with DSTL) | AlexNet+ | 77.36% | 82.13% | 83.33% | 80.91% | 93.20%
BCDR (with DSTL) | ShuffleNet+ | 81.97% | 87.83% | 86.36% | 89.31% | 94.74%
BCDR (with DSTL) | MobileNet-v2+ | 82.39% | 86.31% | 80.30% | 92.36% | 91.51%
BCDR (with DSTL) | GoogLeNet+ | 87.84% | 88.21% | 86.36% | 90.07% | 95.65%
BCDR (with DSTL) | ResNet-50+ | 83.65% | 87.07% | 84.09% | 90.07% | 93.92%
BCDR (with DSTL) | ResNet-101+ | 82.81% | 87.07% | 81.06% | 93.13% | 94.29%
BCDR (with DSTL) | VGG-16+ | 81.76% | 87.97% | 90.91% | 87.02% | 94.52%
BCDR (with DSTL) | VGG-19+ | 86.79% | 89.11% | 89.39% | 90.84% | 94.57%
Dataset & Technique | Model | Training Time | Epochs | Iterations
---|---|---|---|---
MIAS (with SSTL) | AlexNet | 04 min 22 s | 48 | 1824
MIAS (with SSTL) | ShuffleNet | 17 min 51 s | 38 | 1444
MIAS (with SSTL) | MobileNet-v2 | 12 min 17 s | 20 | 760
MIAS (with SSTL) | GoogLeNet | 08 min 50 s | 29 | 1102
MIAS (with SSTL) | ResNet-50 | 11 min 30 s | 18 | 684
MIAS (with SSTL) | ResNet-101 | 37 min 17 s | 25 | 950
MIAS (with SSTL) | VGG-16 | 09 min 55 s | 21 | 798
MIAS (with SSTL) | VGG-19 | 25 min 13 s | 46 | 1748
MIAS (with DSTL) | AlexNet+ | 02 min 43 s | 21 | 798
MIAS (with DSTL) | ShuffleNet+ | 11 min 49 s | 28 | 1064
MIAS (with DSTL) | MobileNet-v2+ | 11 min 35 s | 19 | 722
MIAS (with DSTL) | GoogLeNet+ | 08 min 08 s | 25 | 950
MIAS (with DSTL) | ResNet-50+ | 09 min 54 s | 16 | 608
MIAS (with DSTL) | ResNet-101+ | 31 min 23 s | 21 | 798
MIAS (with DSTL) | VGG-16+ | 08 min 22 s | 18 | 684
MIAS (with DSTL) | VGG-19+ | 08 min 48 s | 15 | 570
BCDR (with SSTL) | AlexNet | 02 min 45 s | 42 | 1974
BCDR (with SSTL) | ShuffleNet | 10 min 30 s | 30 | 1410
BCDR (with SSTL) | MobileNet-v2 | 09 min 51 s | 23 | 1081
BCDR (with SSTL) | GoogLeNet | 05 min 51 s | 28 | 1316
BCDR (with SSTL) | ResNet-50 | 21 min 37 s | 42 | 1974
BCDR (with SSTL) | ResNet-101 | 13 min 38 s | 14 | 658
BCDR (with SSTL) | VGG-16 | 11 min 56 s | 21 | 987
BCDR (with SSTL) | VGG-19 | 28 min 11 s | 43 | 2021
BCDR (with DSTL) | AlexNet+ | 02 min 40 s | 41 | 1927
BCDR (with DSTL) | ShuffleNet+ | 05 min 02 s | 16 | 752
BCDR (with DSTL) | MobileNet-v2+ | 07 min 13 s | 17 | 799
BCDR (with DSTL) | GoogLeNet+ | 03 min 32 s | 17 | 799
BCDR (with DSTL) | ResNet-50+ | 15 min 13 s | 30 | 1410
BCDR (with DSTL) | ResNet-101+ | 11 min 04 s | 11 | 517
BCDR (with DSTL) | VGG-16+ | 09 min 30 s | 15 | 705
BCDR (with DSTL) | VGG-19+ | 08 min 27 s | 13 | 611
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).