An Intelligent Multi-View Active Learning Method Based on a Double-Branch Network
Figure 1. Diagram of our proposed MALDB method. Each 'output' shown in the figure is used to calculate an uncertainty, and the average of all uncertainties is the final uncertainty score. For simplicity, only one output (shown in the yellow box) is drawn to illustrate the uncertainty calculation.
Figure 2. Example images of different datasets. (a) Fashion-MNIST [33], (b) CIFAR-10 [34], (c) SVHN [35], (d) Scene-15 [36], (e) UIUC-Sports [37].
Figure 3. Test accuracy curve of different methods on the Fashion-MNIST dataset.
Figure 4. Test accuracy and standard deviation curves of different methods on the CIFAR-10 dataset.
Figure 5. Test accuracy and standard deviation curves of different methods on the SVHN dataset.
Figure 6. Test accuracy and standard deviation curves of different methods on the Scene-15 dataset.
Figure 7. Test accuracy and standard deviation curves of different methods on the UIUC-Sports dataset.
Figure 8. F1-scores at the 150th iteration obtained by different methods on the Fashion-MNIST dataset.
Figure 9. F1-scores at the 150th iteration obtained by different methods on the CIFAR-10 dataset.
Figure 10. F1-scores at the 150th iteration obtained by different methods on the SVHN dataset.
Figure 11. F1-scores at the 10th iteration obtained by different methods on the Scene-15 dataset.
Figure 12. F1-scores at the 8th iteration obtained by different methods on the UIUC-Sports dataset.
Figure 13. The images with the largest uncertainty selected by different methods on the SVHN dataset.
Figure 14. Test accuracy curve of the ablation experiment on the Fashion-MNIST dataset.
Figure 15. Test accuracy curve of the ablation experiment on the CIFAR-10 dataset.
Figure 16. Test accuracy curve of the ablation experiment on the SVHN dataset.
Figure 17. Test accuracy curve of the ablation experiment on the Scene-15 dataset.
Figure 18. Test accuracy curve of the ablation experiment on the UIUC-Sports dataset.
Figure 19. Images with contrary uncertainty from the SVHN dataset.
Abstract
1. Introduction
2. Related Work
2.1. Active Learning Based on Uncertainty Criterion
2.2. Active Learning with Multiple Views
2.3. Motivation of Our Work
3. Multi-View Active Learning Based on Double-Branch Structure
3.1. Double-Branch Network Structure
3.2. Multi-View Sample Selection Strategy
3.3. Overall Algorithm
Algorithm 1. Multi-view active learning based on double-branch network

Input:
Xl, Xu, M0, n, ƒ, R, T, Oi,j
{Xl is the initial labeled dataset; Xu is the unlabeled dataset; M0 is the initial model; n is the number of softmax layers; ƒ computes the entropy of an output using Equation (1); R is the number of unlabeled samples to be queried in each iteration; T is the total number of query iterations; Oi,j is the output of the hidden layer}
Initialization:
L0 = Xl, U0 = Xu
Divide L0 into two parts: a randomly selected training dataset Ltrain and a validation dataset Lvalid
1: for i = 0 … T−1 do
2:   add a softmax layer to each hidden layer of each branch in M0
3:   Mi+1 = train(Mi, Ltrain)
4:   for j = 1 … n−1 do
5:     compute the losses l1,j, l2,j of each hidden layer in each branch using Lvalid
6:     compute the corresponding view weights from l1,j and l2,j using Equation (2)
7:   end for
8:   for each x in Ui do
9:     compute the uncertainty score using Equations (3) and (4)
10:  end for
11:  label the R instances with the largest scores in Ui to form Qi
12:  update Li+1 = Li ∪ Qi and Ui+1 = Ui − Qi
13: end for
Output:
MT: the final trained model
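The query step of Algorithm 1 can be sketched in Python as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `model_predict` is a hypothetical callable returning the softmax output of every hidden-layer view for one sample, and `view_weights` stands in for the loss-based weights of Equation (2), which are not reproduced here.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of one softmax output (the role of Equation (1))."""
    p = np.clip(probs, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def uncertainty_score(view_outputs, view_weights):
    """Weighted average of the per-view entropies.

    The paper averages the uncertainties of all auxiliary softmax outputs;
    the exact weighting (Equations (2)-(4)) is derived from validation
    losses, so here the weights are supplied by the caller as an assumption.
    """
    ents = np.array([entropy(p) for p in view_outputs])
    w = np.asarray(view_weights, dtype=float)
    return float(np.sum(w * ents) / np.sum(w))

def maldb_query(model_predict, unlabeled, view_weights, R):
    """One query step (steps 8-11 of Algorithm 1): score every unlabeled
    sample and return the R samples with the largest uncertainty."""
    scores = [uncertainty_score(model_predict(x), view_weights)
              for x in unlabeled]
    order = np.argsort(scores)[::-1]  # largest uncertainty first
    return [unlabeled[i] for i in order[:R]]
```

After the R selected samples are labeled, they move from Ui to the labeled pool and the model is retrained, which corresponds to steps 11–12 of the algorithm.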
4. Experiments
4.1. Datasets and Experimental Setup
4.1.1. Datasets
4.1.2. Experimental Setup
Models
Hyperparameters
Environment
Baselines
4.2. Experimental Results and Analysis
4.3. Ablation Experiment
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Wang, B.; Kong, W.; Li, W.; Xiong, N.N. A dual-chaining watermark scheme for data integrity protection in Internet of Things. CMC Comput. Mater. Contin. 2019, 58, 679–695. [Google Scholar] [CrossRef] [Green Version]
- Wang, B.; Kong, W.; Guan, H.; Xiong, N.N. Air Quality Forecasting Based on Gated Recurrent Long Short Term Memory Model in Internet of Things. IEEE Access 2019, 7, 69524–69534. [Google Scholar] [CrossRef]
- Zhou, S.; Liang, W.; Li, J.; Kim, J.U. Improved VGG model for road traffic sign recognition. CMC Comput. Mater. Contin. 2018, 57, 11–24. [Google Scholar] [CrossRef]
- Kwon, D.; Natarajan, K.; Suh, S.C.; Kim, H.; Kim, J. An empirical study on network anomaly detection using convolutional neural networks. In Proceedings of the IEEE 38th International Conference on Distributed Computing Systems (ICDCS), Vienna, Austria, 2–5 July 2018; IEEE: New York, NY, USA, 2018. [Google Scholar]
- Zhang, C.; Zhang, H.; Qiao, J.; Yuan, D.; Zhang, M. Deep transfer learning for intelligent cellular traffic prediction based on cross-domain big data. IEEE J. Sel. Areas Commun. 2019, 37, 1389–1401. [Google Scholar] [CrossRef]
- Aceto, G.; Ciuonzo, D.; Montieri, A.; Pescapé, A. Mobile encrypted traffic classification using deep learning: Experimental evaluation, lessons learned, and challenges. IEEE Trans. Netw. Serv. Manag. 2019, 16, 445–458. [Google Scholar] [CrossRef]
- Zhang, C.; Fiore, M.; Patras, P. Multi-Service mobile traffic forecasting via convolutional long Short-Term memories. In Proceedings of the IEEE International Symposium on Measurements & Networking (M&N), Auckland, New Zealand, 20–23 May 2019; IEEE: New York, NY, USA, 2019; pp. 1–6. [Google Scholar]
- Aceto, G.; Ciuonzo, D.; Montieri, A.; Pescapè, A. MIMETIC: Mobile encrypted traffic classification using multimodal deep learning. Comput. Netw. 2019, 165, 106944. [Google Scholar] [CrossRef]
- Li, D.L.; Prasad, M.; Liu, C.L.; Lin, C.T. Multi-view vehicle detection based on fusion part model with active learning. IEEE Trans. Intell. Transp. Syst. 2020, 1–12, early access. [Google Scholar] [CrossRef]
- Jamshidpour, N.; Safari, A.; Homayouni, S. A GA-Based Multi-View, Multi-Learner Active Learning Framework for Hyperspectral Image Classification. Remote Sens. 2020, 12, 297. [Google Scholar] [CrossRef] [Green Version]
- Zheng, C.; Chen, J.; Kong, J.; Yi, Y.; Lu, Y.; Wang, J.; Liu, C. Scene Recognition via Semi-Supervised Multi-Feature Regression. IEEE Access 2019, 7, 121612–121628. [Google Scholar] [CrossRef]
- Wang, R.; Shen, M.; Li, Y.; Gomes, S. Multi-task joint sparse representation classification based on fisher discrimination dictionary learning. CMC Comput. Mater. Contin. 2018, 57, 25–48. [Google Scholar] [CrossRef]
- Zheng, C.; Zhang, F.; Hou, H.; Bi, C.; Zhang, M.; Zhang, B. Active discriminative dictionary learning for weather recognition. Math. Probl. Eng. 2016, 1–12. [Google Scholar] [CrossRef] [Green Version]
- Wang, D.; Shang, Y. A new active labeling method for deep learning. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; IEEE: New York, NY, USA, 2014. [Google Scholar]
- Sun, H.; McIntosh, S. Analyzing cross-domain transportation big data of New York City with semi-supervised and active learning. CMC Comput. Mater. Contin. 2018, 57, 1–9. [Google Scholar] [CrossRef]
- Zheng, C.; Yi, Y.; Qi, M.; Liu, F.; Bi, C.; Wang, J.; Kong, J. Multicriteria-based active discriminative dictionary learning for scene recognition. IEEE Access 2017, 6, 4416–4426. [Google Scholar] [CrossRef]
- Zhu, J.-J.; Bento, J. Generative adversarial active learning. arXiv 2017, arXiv:1702.07956. [Google Scholar]
- Sener, O.; Savarese, S. Active learning for convolutional neural networks: A core-set approach. arXiv 2017, arXiv:1708.00489. [Google Scholar]
- Zhang, Q.; Sun, S. Multiple-view multiple-learner active learning. Pattern Recognit. 2010, 43, 3113–3119. [Google Scholar] [CrossRef]
- Wang, K.; Zhang, D.; Li, Y.; Zhang, R.; Lin, L. Cost-effective active learning for deep image classification. IEEE Trans. Circuits Syst. Video Technol. 2016, 27, 2591–2600. [Google Scholar] [CrossRef] [Green Version]
- He, T.; Jin, X.; Ding, G.; Yi, L.; Yan, C. Towards Better Uncertainty Sampling: Active Learning with Multiple Views for Deep Convolutional Neural Network. In Proceedings of the 2019 IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 8–12 July 2019; IEEE: New York, NY, USA, 2019. [Google Scholar]
- Tong, S.; Koller, D. Support vector machine active learning with applications to text classification. J. Mach. Learn. Res. 2001, 2, 45–66. [Google Scholar]
- Jain, P.; Kapoor, A. Active learning for large multi-class problems. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: New York, NY, USA, 2009. [Google Scholar]
- Tuia, D.; Ratle, F.; Pacifici, F.; Kanevski, M.F.; Emery, W.J. Active Learning Methods for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2182–2232. [Google Scholar] [CrossRef]
- Gal, Y.; Islam, R.; Ghahramani, Z. Deep bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; PMLR: Cambridge, UK, 2017; Volume 70, pp. 1183–1192. [Google Scholar]
- Zhou, Z.; Shin, J.; Zhang, L.; Gurudu, S.; Gotway, M.; Liang, J. Fine-tuning convolutional neural networks for biomedical image analysis: actively and incrementally. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: New York, NY, USA, 2017. [Google Scholar]
- Blum, A.; Mitchell, T. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, Madison, WI, USA, 24–26 July 1998. [Google Scholar]
- Muslea, I.; Minton, S.; Knoblock, C.A. Active learning with multiple views. J. Artif. Intell. Res. 2006, 27, 203–233. [Google Scholar]
- Yu, S.; Krishnapuram, B.; Rosales, R.; Rao, R.B. Bayesian co-training. J. Mach. Learn. Res. 2011, 12, 2649–2680. [Google Scholar]
- Wang, W.; Zhou, Z.-H. On multi-view active learning and the combination with semi-supervised learning. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; ACM: New York, NY, USA, 2008. [Google Scholar]
- Huang, S.-J.; Zhao, J.-W.; Liu, Z.-Y. Cost-effective training of deep cnns with active model adaptation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; ACM: New York, NY, USA, 2018. [Google Scholar]
- Gal, Y.; Ghahramani, Z. Bayesian convolutional neural networks with Bernoulli approximate variational inference. arXiv 2015, arXiv:1506.02158. [Google Scholar]
- Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-mnist: A novel image dataset for benchmarking machine learning algorithms. arXiv 2017, arXiv:1708.07747. [Google Scholar]
- Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; Technical Report TR-2009; University of Toronto: Toronto, ON, Canada, 2009. [Google Scholar]
- Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; Ng, A.Y. Reading digits in natural images with unsupervised feature learning. In Proceedings of the Neural Information Processing Systems (NIPS 2011), Granada, Spain, 16–17 December 2011. [Google Scholar]
- Lazebnik, S.; Schmid, C.; Ponce, J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; IEEE: New York, NY, USA, 2006; Volume 2, pp. 2169–2178. [Google Scholar]
- Li, L.J.; Fei-Fei, L. What, where and who? Classifying events by scene and object recognition. In Proceedings of the IEEE 11th International Conference on Computer Vision, Rio De Janeiro, Brazil, 14–21 October 2007; IEEE: New York, NY, USA, 2007; pp. 1–8. [Google Scholar]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
- Jaderberg, M.; Simonyan, K.; Zisserman, A. Spatial transformer networks. In Advances in Neural Information Processing Systems; NIPS: La Jolla, CA, USA, 2015; pp. 2017–2025. [Google Scholar]
- Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
| Methods \ Iteration | 1 | 50 | 100 | 150 |
|---|---|---|---|---|
| BCNN-EN | 80.334 ± 0.570 | 88.130 ± 0.265 | 89.930 ± 0.203 | 90.662 ± 0.217 |
| AL-MV | 76.712 ± 1.369 | 86.768 ± 0.301 | 88.502 ± 0.327 | 89.022 ± 0.262 |
| Our model-RAND | 74.532 ± 1.021 | 86.528 ± 0.253 | 88.180 ± 0.260 | 88.850 ± 0.427 |
| CNN | 76.096 ± 0.362 | 88.194 ± 0.198 | 90.212 ± 0.193 | 90.812 ± 0.144 |
| MALDB | 75.122 ± 0.353 | 86.137 ± 0.085 | 87.804 ± 0.256 | 88.461 ± 0.243 |
| ALL | 91.500 ± 0.320 | 91.500 ± 0.320 | 91.500 ± 0.320 | 91.500 ± 0.320 |

| Methods \ Iteration | 1 | 50 | 100 | 150 |
|---|---|---|---|---|
| BCNN-EN | 43.418 ± 0.401 | 72.787 ± 0.652 | 80.625 ± 0.363 | 85.204 ± 0.228 |
| AL-MV | 41.381 ± 0.453 | 71.069 ± 0.484 | 78.911 ± 0.353 | 84.356 ± 0.352 |
| Our model-RAND | 38.236 ± 1.319 | 75.310 ± 0.732 | 82.464 ± 0.477 | 86.098 ± 0.314 |
| CNN | 45.683 ± 0.371 | 72.087 ± 0.363 | 79.022 ± 0.237 | 83.238 ± 0.313 |
| MALDB | 39.242 ± 0.085 | 76.712 ± 0.566 | 84.704 ± 0.427 | 87.496 ± 0.188 |
| ALL | 90.020 ± 0.170 | 90.020 ± 0.170 | 90.020 ± 0.170 | 90.020 ± 0.170 |

| Methods \ Iteration | 1 | 50 | 100 | 150 |
|---|---|---|---|---|
| BCNN-EN | 83.395 ± 0.877 | 91.313 ± 0.357 | 92.368 ± 0.181 | 92.804 ± 0.129 |
| AL-MV | 80.794 ± 0.837 | 89.377 ± 0.071 | 90.884 ± 0.186 | 91.443 ± 0.163 |
| Our model-RAND | 73.492 ± 0.817 | 92.133 ± 0.281 | 93.140 ± 0.165 | 93.273 ± 0.209 |
| CNN | 80.844 ± 0.324 | 89.094 ± 0.253 | 90.376 ± 0.281 | 90.873 ± 0.266 |
| MALDB | 73.363 ± 0.934 | 92.657 ± 0.346 | 93.413 ± 0.133 | 93.487 ± 0.103 |
| ALL | 93.723 ± 0.523 | 93.723 ± 0.523 | 93.723 ± 0.523 | 93.723 ± 0.523 |

| Methods \ Iteration | 2 | 4 | 6 | 8 | 10 |
|---|---|---|---|---|---|
| BCNN-EN | 68.766 ± 0.321 | 74.524 ± 0.562 | 78.744 ± 0.365 | 81.393 ± 0.268 | 82.612 ± 0.265 |
| AL-MV | 65.234 ± 0.413 | 70.991 ± 0.674 | 74.961 ± 0.535 | 78.429 ± 0.478 | 79.534 ± 0.301 |
| Our model-RAND | 72.563 ± 0.619 | 77.808 ± 0.522 | 81.312 ± 0.625 | 83.190 ± 0.236 | 84.121 ± 0.253 |
| CNN | 64.248 ± 0.417 | 69.522 ± 0.733 | 73.103 ± 0.423 | 75.673 ± 0.573 | 76.647 ± 0.198 |
| MALDB | 74.602 ± 0.266 | 80.392 ± 0.386 | 84.349 ± 0.465 | 86.217 ± 0.320 | 86.360 ± 0.085 |
| ALL | 87.82 ± 0.233 | 87.82 ± 0.233 | 87.82 ± 0.233 | 87.82 ± 0.233 | 87.82 ± 0.233 |

| Methods \ Iteration | 2 | 4 | 6 | 8 |
|---|---|---|---|---|
| BCNN-EN | 69.339 ± 0.674 | 76.442 ± 0.522 | 79.876 ± 0.417 | 81.682 ± 0.625 |
| AL-MV | 64.352 ± 0.365 | 72.575 ± 0.321 | 77.454 ± 0.535 | 80.790 ± 0.478 |
| Our model-RAND | 68.125 ± 0.733 | 76.802 ± 0.562 | 81.312 ± 0.268 | 84.468 ± 0.573 |
| CNN | 66.125 ± 0.413 | 73.40 ± 0.619 | 77.065 ± 0.386 | 79.225 ± 0.236 |
| MALDB | 69.523 ± 0.465 | 79.475 ± 0.423 | 85.067 ± 0.266 | 86.816 ± 0.320 |
| ALL | 88.68 ± 0.414 | 88.68 ± 0.414 | 88.68 ± 0.414 | 88.68 ± 0.414 |
| Evaluate \ Methods | BCNN-EN | AL-MV | Our Model-RAND | CNN | MALDB |
|---|---|---|---|---|---|
| Recall | 0.9079 | 0.8918 | 0.8903 | 0.8909 | 0.9098 |
| Precision | 0.9083 | 0.8924 | 0.8902 | 0.8913 | 0.9102 |

| Evaluate \ Methods | BCNN-EN | AL-MV | Our Model-RAND | CNN | MALDB |
|---|---|---|---|---|---|
| Recall | 0.8509 | 0.8450 | 0.8632 | 0.8541 | 0.8733 |
| Precision | 0.8523 | 0.8472 | 0.8655 | 0.8564 | 0.8747 |

| Evaluate \ Methods | BCNN-EN | AL-MV | Our Model-RAND | CNN | MALDB |
|---|---|---|---|---|---|
| Recall | 0.9230 | 0.9070 | 0.9272 | 0.9171 | 0.9293 |
| Precision | 0.9221 | 0.9061 | 0.9269 | 0.9165 | 0.9295 |

| Evaluate \ Methods | BCNN-EN | AL-MV | Our Model-RAND | CNN | MALDB |
|---|---|---|---|---|---|
| Recall | 0.9370 | 0.9249 | 0.9420 | 0.9153 | 0.9524 |
| Precision | 0.9369 | 0.9272 | 0.9439 | 0.9180 | 0.9540 |

| Evaluate \ Methods | BCNN-EN | AL-MV | Our Model-RAND | CNN | MALDB |
|---|---|---|---|---|---|
| Recall | 0.8143 | 0.7961 | 0.8296 | 0.7828 | 0.8554 |
| Precision | 0.8219 | 0.8005 | 0.8319 | 0.7881 | 0.8565 |
| Methods \ Dataset | Fashion-MNIST | CIFAR-10 | SVHN | Scene-15 | UIUC-Sports |
|---|---|---|---|---|---|
| BCNN-EN | 0.9080 | 0.8515 | 0.9225 | 0.9369 | 0.8180 |
| AL-MV | 0.8920 | 0.8460 | 0.9065 | 0.9260 | 0.7982 |
| Our model-RAND | 0.8902 | 0.8643 | 0.9270 | 0.9429 | 0.8307 |
| CNN | 0.8910 | 0.8552 | 0.9167 | 0.9166 | 0.7854 |
| MALDB | 0.9099 | 0.8739 | 0.9293 | 0.9531 | 0.8559 |
| Methods \ Dataset | Fashion-MNIST | CIFAR-10 | SVHN | Scene-15 | UIUC-Sports |
|---|---|---|---|---|---|
| BCNN-EN | 0.149 m | 0.191 m | 0.191 m | 16.706 m | 74.050 m |
| AL-MV | 0.302 m | 1.770 m | 1.770 m | 121.225 m | 514.262 m |
| Our model-RAND | 0.548 m | 2.550 m | 2.550 m | 173.3 m | 735.869 m |
| CNN | 0.302 m | 1.770 m | 1.770 m | 121.225 m | 514.262 m |
| MALDB | 0.548 m | 2.550 m | 2.550 m | 173.3 m | 735.869 m |

Avg. Epoch Time/Test Time:

| Methods \ Dataset | Fashion-MNIST | CIFAR-10 | SVHN | Scene-15 | UIUC-Sports |
|---|---|---|---|---|---|
| BCNN-EN | 36.1 s/0.201 s | 42.0 s/0.219 s | 42.0 s/0.219 s | 556.0 s/0.351 s | 276.3 s/0.498 s |
| AL-MV | 58.9 s/0.307 s | 74.3 s/0.387 s | 74.3 s/0.387 s | 824.1 s/0.511 s | 488.8 s/0.865 s |
| Our model-RAND | 128.3 s/0.611 s | 132.5 s/0.631 s | 132.5 s/0.631 s | 1655.8 s/0.880 s | 990.6 s/1.229 s |
| CNN | 58.9 s/0.307 s | 78.5 s/0.374 s | 78.5 s/0.374 s | 824.1 s/0.511 s | 488.8 s/0.865 s |
| MALDB | 128.3 s/0.611 s | 145.9 s/0.695 s | 145.9 s/0.695 s | 1655.8 s/0.880 s | 990.6 s/1.229 s |
| Methods \ Iteration | 1 | 50 | 100 | 150 |
|---|---|---|---|---|
| MALDB-EN | 74.014 ± 0.320 | 87.840 ± 0.347 | 89.974 ± 0.323 | 90.512 ± 0.211 |
| MALDB-CNN | 75.882 ± 0.728 | 87.253 ± 0.264 | 89.155 ± 0.204 | 89.888 ± 0.217 |
| MALDB | 76.096 ± 0.362 | 88.194 ± 0.198 | 90.212 ± 0.193 | 90.812 ± 0.144 |

| Methods \ Iteration | 1 | 50 | 100 | 150 |
|---|---|---|---|---|
| MALDB-EN | 37.452 ± 0.437 | 76.388 ± 0.398 | 84.166 ± 0.560 | 86.542 ± 0.474 |
| MALDB-CNN | 38.183 ± 1.541 | 72.997 ± 0.479 | 80.782 ± 0.319 | 84.194 ± 0.307 |
| MALDB | 39.242 ± 0.085 | 76.712 ± 0.566 | 84.704 ± 0.427 | 87.496 ± 0.188 |

| Methods \ Iteration | 1 | 50 | 100 | 150 |
|---|---|---|---|---|
| MALDB-EN | 71.848 ± 0.806 | 91.875 ± 0.174 | 92.655 ± 0.248 | 92.773 ± 0.182 |
| MALDB-CNN | 74.735 ± 0.809 | 90.808 ± 0.346 | 91.756 ± 0.287 | 91.984 ± 0.230 |
| MALDB | 73.363 ± 0.934 | 92.657 ± 0.533 | 93.413 ± 0.133 | 93.487 ± 0.103 |

| Methods \ Iteration | 2 | 4 | 6 | 8 | 10 |
|---|---|---|---|---|---|
| MALDB-EN | 73.843 ± 0.605 | 79.088 ± 0.103 | 82.592 ± 0.230 | 84.470 ± 0.533 | 85.401 ± 0.133 |
| MALDB-CNN | 70.939 ± 0.558 | 76.698 ± 0.230 | 80.917 ± 0.103 | 83.566 ± 0.346 | 84.785 ± 0.248 |
| MALDB | 74.602 ± 0.714 | 80.392 ± 0.182 | 84.349 ± 0.182 | 86.217 ± 0.174 | 86.360 ± 0.287 |

| Methods \ Iteration | 2 | 4 | 6 | 8 |
|---|---|---|---|---|
| MALDB-EN | 69.285 ± 0.756 | 77.962 ± 0.103 | 82.541 ± 0.364 | 85.628 ± 0.127 |
| MALDB-CNN | 69.397 ± 0.695 | 77.233 ± 0.248 | 81.367 ± 0.230 | 83.873 ± 0.287 |
| MALDB | 69.523 ± 0.827 | 79.475 ± 0.519 | 85.067 ± 0.182 | 86.816 ± 0.174 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, F.; Zhang, T.; Zheng, C.; Cheng, Y.; Liu, X.; Qi, M.; Kong, J.; Wang, J. An Intelligent Multi-View Active Learning Method Based on a Double-Branch Network. Entropy 2020, 22, 901. https://doi.org/10.3390/e22080901