AF-SENet: Classification of Cancer in Cervical Tissue Pathological Images Based on Fusing Deep Convolution Features
Figure 1. Cervical pathological tissue images, each with a resolution of 3456 × 4608 (RGB) pixels.
Figure 2. Small-size cervical tissue biopsy images, each with a resolution of 200 × 200 (RGB) pixels.
Figure 3. Feature vectors extracted by the deep models and the traditional image features (TIF).
Figure 4. Diagram of the local binary pattern (LBP) descriptor calculation process.
Figure 5. Framework diagram of the deep network convolutional feature fusion subnet based on ANOVA F-spectral embedding. The loss function of this subnet is the categorical cross-entropy function, the optimization algorithm is stochastic gradient descent, and the learning rate is 0.1 for epochs 0–60, 0.01 for epochs 61–120, 0.001 for epochs 121–180, and 0.0001 for epochs 181 and above. In the figure, M is the number of samples in the training set, n is the feature length of each sample after dimension reduction and fusion, and s is the column length of the deep feature matrix (the feature length of a sample). FC stands for fully connected layer and BN stands for batch normalization layer.
Figure 6. Framework of the proposed pathological cervical image classification method based on fused deep network convolutional features.
Figure 7. Analysis diagrams for deep convolutional features and traditional features of cervical cancer tissue images. (The method for calculating the weights in this figure is shown in Equation (21).) VGG19 denotes VGGNet19, InRes denotes Inception-ResNet, Resnet denotes ResNet50 V2, DenseNet denotes DenseNet121, and Inception denotes Inception V3.
Figure 8. Classification accuracy curve of the VGGNet19 model.
Figure 9. Classification accuracy curves of some individual models.
Figure 10. Classification accuracy curves of the fused models.
Figure 11. ROC curves of some models.
Abstract
1. Introduction
1.1. Background
1.2. Related Work
1.3. Aims of the Study
- (1) This research aims to solve the problems of missed detection and misdiagnosis caused by the high similarity of cervical pathological tissue images and the reliance on pathologists' experience, and to address the low overall screening efficiency caused by the large volume of slide-reading data.
- (2) This research further explores how fused deep features affect the four-class classification of cervical cancer tissue images under the new classification standard.
2. Method
2.1. Dataset
2.1.1. New Classification Standards
2.1.2. Introduction to Image Features
2.2. Image Processing
Algorithm 1 Image Enhancement Processing
Input: the original image matrix.
Output: the enhanced image matrix.
1 FOR p = 1 : s // s is the number of image samples
2   Implement random cropping based on grayscale matching:
3     (i) perform random cropping according to Formulas (4) and (5);
4     (ii) determine whether the cropping conditions are met.
5   Randomly shift the cropped image tensor according to Equation (6).
6   Randomly rotate according to Formula (7).
7   Randomly scale according to Formula (8).
8   Randomly adjust the brightness according to Equation (9).
9   Normalize the enhanced, small-sized image according to Equation (10).
10 END
11 Return the enhanced image matrix.
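A minimal sketch of the augmentation steps in Algorithm 1 is shown below, using the ImageDataGenerator from the Keras version listed in the experimental conditions. The exact ranges defined by Formulas (4)–(10) are not reproduced here, so the shift, rotation, scaling, and brightness parameters, the crop helper `random_crop`, and its 200 × 200 output size are illustrative assumptions rather than the authors' settings.

```python
# Hedged sketch of Algorithm 1: random crop, shift, rotation, scaling, brightness
# adjustment, and normalization. Parameter values are placeholders.
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    width_shift_range=0.1,        # random shift, cf. Equation (6); range assumed
    height_shift_range=0.1,
    rotation_range=15,            # random rotation, cf. Formula (7); angle assumed
    zoom_range=0.1,               # random scaling, cf. Formula (8); factor assumed
    brightness_range=(0.8, 1.2),  # random brightness, cf. Equation (9); range assumed
    rescale=1.0 / 255.0,          # normalization, cf. Equation (10)
)

def random_crop(image, size=(200, 200)):
    """Placeholder for the grayscale-matching random crop of Formulas (4) and (5);
    here it is a plain uniform random crop without the matching condition."""
    h, w = image.shape[:2]
    top = np.random.randint(0, h - size[0] + 1)
    left = np.random.randint(0, w - size[1] + 1)
    return image[top:top + size[0], left:left + size[1]]

# Usage: crop each large pathology image, then draw augmented batches.
# crops = np.stack([random_crop(img) for img in images])
# batches = augmenter.flow(crops, labels, batch_size=32)
```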
2.3. Fine-Tuned Transfer Model
2.4. Deep Convolution Feature Fusion Mechanism Based on ANOVA F-Value Spectral Embedding (AF-SENet)
Algorithm 2 ANOVA F-Spectral Embedding
Input: the sample image feature matrix.
Output: the selected and transformed sample image feature matrix.
1 FOR i = 1 : n // n is the dimension of a feature in the feature matrix of the image sample
2   Calculate the F-value f_i of the i-th feature according to Formula (18).
3   Accumulate F_sum += f_i. // F_sum is the sum of the F-values of all features; f_i is the F-value of the i-th feature
4 END
5 Calculate each feature's proportion f_i / F_sum and sort the features in descending order.
6 FOR i = 1 : n
7   IF (sum += f_i / F_sum) < 99.9%
8     Append the i-th feature to the selected feature matrix.
9   END
10 END
11 Transform the selected feature matrix into a graph representation using the affinity (adjacency) matrix.
12 Construct the unnormalized graph Laplacian and the normalized graph Laplacian.
13 Perform eigenvalue decomposition on the Laplacian obtained above.
14 Return the selected and transformed sample image feature matrix.
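The sketch below illustrates the two stages of Algorithm 2 with scikit-learn: rank features by their ANOVA F-value and keep columns until 99.9% of the cumulative F-value is reached, then embed the selected features with spectral embedding (graph construction plus Laplacian eigen-decomposition). The use of `f_classif` and `SpectralEmbedding`, the nearest-neighbour affinity, and the `n_components` value are assumptions, not the authors' implementation.

```python
# Hedged sketch of ANOVA F-value selection followed by spectral embedding.
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.manifold import SpectralEmbedding

def anova_f_spectral_embedding(X, y, n_components=64, threshold=0.999):
    # ANOVA F-value of every feature against the class labels (Formula (18) is
    # the standard one-way ANOVA F statistic).
    f_values, _ = f_classif(X, y)
    f_values = np.nan_to_num(f_values)

    # Sort features by F-value (descending) and keep those whose cumulative
    # share of the total F-value stays below the 99.9% threshold.
    order = np.argsort(f_values)[::-1]
    ratios = f_values[order] / f_values.sum()
    keep = order[np.cumsum(ratios) < threshold]
    X_selected = X[:, keep]

    # Affinity-graph construction and Laplacian eigen-decomposition are handled
    # internally by SpectralEmbedding (nearest-neighbour affinity assumed here).
    embedder = SpectralEmbedding(n_components=n_components,
                                 affinity="nearest_neighbors")
    return embedder.fit_transform(X_selected)

# Usage: X_fused = anova_f_spectral_embedding(np.hstack([feat_a, feat_b]), labels)
```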
2.5. Feature Analysis
2.6. Evaluation Criteria
3. Results and Discussion
3.1. Experimental Conditions
3.2. Multitype Features and Fusion Analysis
3.3. Accuracy of Classification before and after Fusion
3.4. Model Evaluation
3.5. Comparison between the Optimal Model in this Paper and Traditional Machine Learning Methods under Different Characteristics
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Traditional | Version 3 | Version 4 |
---|---|---|
Mild atypical hyperplasia | CIN1 | LSIL |
Moderate atypical hyperplasia | CIN2 | HSIL |
Severe atypical hyperplasia | CIN3 | HSIL |
Carcinoma in situ | Carcinoma in situ | HSIL |
Numbering | Combined Content | Feature Length | Numbering | Combined Content | Feature Length |
---|---|---|---|---|---|
ResNet50 v2 | 2048 | 400 | |||
DenseNet121 | 1024 | 600 | |||
Inception v3 | 2048 | 600 | |||
Inception-ResNet | 1536 | 600 | |||
400 | 800 | ||||
400 | TIF | — | 1408 | ||
400 | VGG19 | — | 512 | ||
400 | LBP | — | 256 | ||
400 | HOG | — | 1152 |
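The feature lengths listed above can be related to standard extraction pipelines: globally average-pooled deep convolutional features from pretrained Keras models (2048 for ResNet50 V2, 1024 for DenseNet121) and traditional image features combining a 256-bin LBP histogram with a 1152-dimensional HOG descriptor (256 + 1152 = 1408 for TIF). The sketch below is one way these vectors could be produced; the HOG/LBP parameters and the omission of per-model input preprocessing are assumptions and may differ from the paper's exact settings.

```python
# Hedged sketch of deep-feature and TIF extraction for 200 x 200 RGB patches.
import numpy as np
from keras.applications import ResNet50V2, DenseNet121
from skimage.feature import local_binary_pattern, hog
from skimage.color import rgb2gray

# Deep convolutional features: global average pooling over the last conv block
# yields 2048-d (ResNet50 V2) and 1024-d (DenseNet121) vectors.
resnet = ResNet50V2(weights="imagenet", include_top=False,
                    pooling="avg", input_shape=(200, 200, 3))
densenet = DenseNet121(weights="imagenet", include_top=False,
                       pooling="avg", input_shape=(200, 200, 3))

def deep_features(model, batch):
    # batch: (n_samples, 200, 200, 3); per-model preprocess_input is omitted here.
    return model.predict(batch)

def tif_features(rgb_patch):
    gray = rgb2gray(rgb_patch)
    # 256-bin LBP histogram (P = 8 neighbours gives codes 0..255).
    lbp = local_binary_pattern(gray, P=8, R=1, method="default")
    lbp_hist, _ = np.histogram(lbp, bins=256, range=(0, 256), density=True)
    # HOG descriptor; with these assumed parameters a 200 x 200 patch yields
    # 12 x 12 cells x 8 orientations = 1152 values.
    hog_vec = hog(gray, orientations=8, pixels_per_cell=(16, 16),
                  cells_per_block=(1, 1), block_norm="L2-Hys")
    return np.concatenate([lbp_hist, hog_vec])  # 256 + 1152 = 1408-d TIF vector
```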
Layer Name | Number of Neurons | Activation Function | Regularization |
---|---|---|---|
Input | Feature dimension | ReLU | L1 |
FC1 | 4096 | ReLU | L1 |
BN1 | — | — | — |
FC2 | 4096 | ReLU | L1 |
BN2 | — | — | — |
Classification | 4 | Softmax | — |
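A minimal Keras sketch of the classification subnet described by this table and the Figure 5 caption is given below: two 4096-unit fully connected layers with ReLU activation, L1 regularization and batch normalization, a 4-way softmax output, categorical cross-entropy loss, SGD, and the stepped learning-rate schedule (0.1 / 0.01 / 0.001 / 0.0001). The L1 regularization strength, batch size, and number of epochs are assumptions not specified in the table.

```python
# Hedged sketch of the fusion-classification subnet and its training schedule.
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization
from keras.regularizers import l1
from keras.optimizers import SGD
from keras.callbacks import LearningRateScheduler

def build_subnet(n_features):
    model = Sequential([
        Dense(4096, activation="relu", kernel_regularizer=l1(1e-5),
              input_shape=(n_features,)),                          # FC1
        BatchNormalization(),                                       # BN1
        Dense(4096, activation="relu", kernel_regularizer=l1(1e-5)),  # FC2
        BatchNormalization(),                                       # BN2
        Dense(4, activation="softmax"),                              # 4-class output
    ])
    model.compile(optimizer=SGD(learning_rate=0.1),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def step_lr(epoch):
    # Piecewise-constant schedule from the Figure 5 caption.
    if epoch <= 60:
        return 0.1
    if epoch <= 120:
        return 0.01
    if epoch <= 180:
        return 0.001
    return 0.0001

# model = build_subnet(n_features=fused_features.shape[1])
# model.fit(fused_features, labels_onehot, epochs=200, batch_size=64,
#           callbacks=[LearningRateScheduler(step_lr)])
```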
Category | Name | Version |
---|---|---|
CPU | Intel Core i5-9600KF | — |
GPU | NVIDIA GTX 1070 | — |
Deep learning framework | Keras | V2.3.1 |
| Feature | Normal | LSIL | HSIL | Cancer | Accuracy |
|---|---|---|---|---|---|
| ResNet50 v2 | 94.69 | 88.01 | 94.36 | 96.84 | 94.26 |
| DenseNet121 | 95.45 | 88.79 | 92.58 | 96.42 | 94.28 |
| Inception v3 | 93.07 | 87.37 | 93.60 | 95.94 | 92.96 |
| Inception-ResNet | 94.66 | 87.20 | 93.63 | 94.66 | 93.60 |
| VGGNet19 | 83.67 | 53.21 | 73.34 | 82.91 | 78.37 |
| TIF | 75.78 | 47.12 | 46.12 | 66.20 | 68.31 |
| | 96.00 | 90.89 | 94.57 | 97.13 | 95.33 |
| | 94.45 | 88.32 | 93.01 | 95.87 | 93.75 |
| | 95.11 | 87.81 | 92.63 | 95.61 | 93.87 |
| | 94.64 | 88.77 | 93.02 | 95.95 | 93.91 |
| | 95.02 | 88.16 | 92.97 | 95.47 | 93.91 |
| | 95.39 | 89.36 | 93.72 | 96.45 | 94.57 |
| | 94.82 | 89.91 | 93.36 | 96.38 | 94.28 |
| | 95.08 | 88.32 | 93.17 | 95.50 | 94.01 |
| | 95.16 | 89.09 | 93.41 | 95.92 | 94.28 |
| | 95.18 | 89.06 | 93.53 | 96.08 | 94.33 |
| Feature | Micro-AUC | Classifier | Micro-AUC |
|---|---|---|---|
| ResNet50 v2 | 0.9941 | | 0.9891 |
| DenseNet121 | 0.9949 | | 0.9871 |
| Inception v3 | 0.9891 | | 0.9918 |
| Inception-ResNet | 0.9906 | | 0.9898 |
| VGGNet19 | 0.9506 | | 0.9877 |
| | 0.9959 | | 0.9881 |
| | 0.9866 | | 0.9884 |
| | 0.9847 | TIF | 0.8952 |
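The micro-averaged AUC values of this kind can be computed from the softmax outputs of the 4-class model; the helper below, using scikit-learn's `roc_auc_score`, is an illustrative sketch rather than the paper's evaluation code.

```python
# Hedged sketch of micro-averaged AUC over the four classes.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

def micro_auc(y_true, y_score, n_classes=4):
    # y_true: integer class labels; y_score: (n_samples, n_classes) softmax outputs.
    y_onehot = label_binarize(y_true, classes=np.arange(n_classes))
    return roc_auc_score(y_onehot, y_score, average="micro")

# Example: micro_auc(test_labels, model.predict(test_features))
```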
| Classification Algorithm | Fused Deep Convolutional Features | TIF |
|---|---|---|
| AF-SENet | 95.33% | 68.31% |
| Random Forest | 94.42% | 63.54% |
| Support Vector Machines | 94.88% | 65.18% |
| k-Means | 93.62% | 86.77% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).